guidingTask
Concerning semantics and topology, the data structures are stored in an ontology: https://github.com/LAAS-HRI/semantic_route_description/blob/master/files/place_description.owl There you will find how the concepts of place, path and region are described.
The elements themselves are stored in the following ontology: https://github.com/LAAS-HRI/semantic_route_description/blob/master/files/adream_mall.owl There you will find the description of each place (incl. shops), path, and region.
e.g. door_h20 description
(door_h20 isA door)
(door_h20 isAlong os_exp_1)
(door_h20 isAlong os_hall)
(door_h20 hasAtRight marco_polo)
(door_h20 isAtLeftOfPath gf_walkway_1)
(door_h20 isIn adream_experiment_room)
(door_h20 isIn adream_hall)
(door_h20 lang « door »)
The lang label provides a human understandable and human readable name, plus synonyms.
door description
(door subClassOf interface)
interface description
(interface subClassOf place)
(interface disjointWith shop)
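For reference, here is a minimal sketch (not part of the repository) of how these assertions can be inspected offline with rdflib, assuming a local copy of adream_mall.owl in RDF/XML serialization:

```python
import rdflib

g = rdflib.Graph()
g.parse("adream_mall.owl", format="xml")  # local copy, assumed RDF/XML

# Print every assertion about door_h20, e.g. the isAlong, hasAtRight,
# isAtLeftOfPath and isIn relations listed above.
for s, p, o in g:
    if str(s).endswith("door_h20"):
        print(str(s).split("#")[-1], str(p).split("#")[-1], str(o).split("#")[-1])
```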
Basic entities (from The Spatial Semantic Hierarchy, Benjamin Kuipers, 1999)
- Place : dimension 0 (ex: a shop or a door)
- Path : dimension 1 (ex: a corridor)
- Region : dimension 2 (ex: a hall)
- Region: One region is connected to another by at least one interface.
- Interface: An interface connects only two regions. Examples of interface: door, stairs, elevator, escalator
- Corridor:
- corridors have at least:
- two edges
- a beginning (begin)
- an end
- an edge and an end / beginning can be at the same place, they are then described as being the same
- two corridors intersect with an intersection that belongs to both corridors
- Open space (openspace) is defined as a particular type of path. It can be seen as a "potato-shaped" path describing the outline of the open space. It materializes the possibility of scanning the room with a glance and reaching any of its points without having to go through a definite path. An open space has no beginning and no end. Open spaces may have intersections with other open spaces or corridors.
- Places (shops, intersections, beginning, end, ...) are defined as being along one or more paths.
Persona type:
- Concerning the person the robot will interact with, we propose to define different types: the average person (lambda), the person with a stroller (disabled), the elderly person (old), the young person (young), the person who knows the place (knowing), and the person who does not know the place (notKnowing).
- These will be described in the semantic route description as: lambda, knowing, notKnowing, disabled, disabled_knowing, disabled_notKnowing, young, young_knowing, young_notKnowing, old, old_knowing, old_notKnowing.
- This information will be used to set up 5 criteria: accessibility, saliency, security, comfort, explicability. Each criterion also has its opposite. For now, only these criteria are described in the ontology https://github.com/LAAS-HRI/semantic_route_description/blob/master/files/route_cost.owl We plan to also make the link between criteria and person types in the ontology in the future.
- Previous Knowledge (not yet implemented, to be discussed): We propose to be able to take into account an array of Places that would describe information that could have been gathered by the dialog concerning the person knowledge about the environment.
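Illustrative only: a tiny hypothetical helper (not part of the system) showing how the persona identifiers listed above compose from a base type and the knowing/notKnowing suffix:

```python
# Hypothetical helper: build one of the persona identifiers listed above.
def persona_type(base="lambda", knows_place=None):
    # base: 'lambda', 'disabled', 'young' or 'old'
    # knows_place: None (unknown), True or False
    if knows_place is None:
        return base
    suffix = "knowing" if knows_place else "notKnowing"
    return suffix if base == "lambda" else base + "_" + suffix

print(persona_type())                               # lambda
print(persona_type("disabled", knows_place=False))  # disabled_notKnowing
print(persona_type("old", knows_place=True))        # old_knowing
```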
Here is a list of functions or information that should be made available all along the interaction:
- DIALOG: In addition to the management of the dialog itself, it would be interesting to have a measure of the interaction quality (does the human « play the game » or not?). It has to be decided whether this « interaction quality » measurement would come only from the dialog or/and from other components => it has to be integrated into the actual robot, but there are features that can help; Christian will send a paper.
- The robot should listen all the time to the human's reactions, e.g. if the human says "it's ok", "ok, I've seen it" or "no, I do not see it"...
- People categorization around the robot: passer-by, spectator, interaction candidate.
The robot chooses a person to interact with. Two choices: the person is well placed and the robot begins the interaction; or the robot has to make a move to approach the person.
Approach( ?? check parameters)
« Hello robot, what’s your name? »
« I’m ready »
The human asks something to the robot. Here, we will assume it will ask for route description
DIALOG management
« I want to buy shoes »
« I want to find a restaurant »
OUTPUTS:
String shop_name (or place_name)
String persona_type
Array of string (tobedefined) previous_knowledge
RESULT: launch of the guiding action server
TODO: 2) the dialog gives the shop name, and ontologenius/individual find "shop name" will give the name of the shop in the system. There could be several answers; for now the first one is taken. It could be interesting to do a get_route_region at this step to find the one that is easy to explain.
This service will give you a route between two regions: https://github.com/LAAS-HRI/semantic_route_description It has to be noticed that this route is at region level, not at place level. The idea is to be able to give a first high-level description of the path (a Python call sketch is given after the example output below).
string from
string to
string persona
bool signpost
---
Route[] routes
float32[] costs
string[] goals
$ rosservice call /semantic_route_description/get_route_region "{from_: 'robot_infodesk', to: 'burger_king', persona: 'lambda', signpost: true}"
routes:
- route: [adream_experiment_room]
- route: [adream_experiment_room, door_C, outside, ff_door_1, first_floor]
- route: [adream_experiment_room, door_D, outside, ff_door_1, first_floor]
- route: [adream_experiment_room, door_E, outside, ff_door_1, first_floor]
- route: [adream_experiment_room, door_h20, adream_hall, elevator_1, first_floor]
- route: [adream_experiment_room, door_h20, adream_hall, stairs_1, first_floor]
costs: [1.0, 7.2, 7.2, 7.2, 4.16, 6.0]
goals: [burger_king_signpost_1, burger_king, burger_king, burger_king, burger_king, burger_king]
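A minimal sketch of how a supervision node could call this service from Python. The service type name and module path are assumptions; only the request and response fields shown above come from the definition:

```python
import rospy
# Assumption: the service type name; the repository defines a service whose
# request/response matches the definition shown above
# (from_, to, persona, signpost -> Route[] routes, float32[] costs, string[] goals).
from semantic_route_description.srv import SemanticRoute  # name is an assumption

rospy.init_node("guiding_supervision_example")
rospy.wait_for_service("/semantic_route_description/get_route_region")
get_route_region = rospy.ServiceProxy(
    "/semantic_route_description/get_route_region", SemanticRoute)

resp = get_route_region(from_="robot_infodesk", to="burger_king",
                        persona="lambda", signpost=True)
for route, cost, goal in zip(resp.routes, resp.costs, resp.goals):
    print(cost, goal, route.route)  # route.route: list of region names
```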
It would be possible to choose the best route given a supplementary criterion, e.g. a geometric one (this route is shorter than the other one). For now, we choose the one with the lowest cost returned by the get_route_region service (see the snippet after the TODO list below).
TODO:
- the supervision system needs to choose one of the routes, and this route will be used as input for the route_verbalisation
- in a second step, take signposts into account (they are available in the route)
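Continuing the sketch above, the current selection rule (lowest cost) could look like this; field names follow the response shown above:

```python
# Pick the route with the lowest cost from the response above (resp comes
# from the previous sketch); this mirrors the current selection rule.
best = min(range(len(resp.costs)), key=lambda i: resp.costs[i])
chosen_route = resp.routes[best].route   # e.g. ['adream_experiment_room']
chosen_goal = resp.goals[best]           # e.g. 'burger_king_signpost_1'
```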
For now, we have a very simple system to verbalize this route.
If we are already in the first region of the route: « The shop_name is nearby, would you like me to show you the shop? »
If we are not in the same region, we call RouteVerbalization(robot_place, shop_name): « It’s not here. » RouteVerbalization result « Would you like me to show you the way to go to the shop? »
TODO: supervision system: new service available for RouteVerbalization where the route is an input parameter
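A sketch of this decision, assuming hypothetical say() and verbalize_route() helpers (the latter standing for the RouteVerbalization call):

```python
# Sketch of the current verbalization logic; say() and verbalize_route()
# are hypothetical helpers (verbalize_route stands for RouteVerbalization).
def introduce_route(shop_name, route):
    if len(route) == 1:  # we are already in the shop's region
        say("The %s is nearby, would you like me to show you the shop?" % shop_name)
    else:
        say("It's not here.")
        say(verbalize_route(route))  # RouteVerbalization result
        say("Would you like me to show you the way to go to the shop?")
```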
INPUTS: [String shop_to_go (optional), String first_interface_in_the_route (optional), tf person_frame]
OUTPUTS: [6DPos robot_position, 6DPos human_position, Array places_to_point]
with places_to_point being the list of visible elements with regard to the inputs (shop_to_go, first_interface_in_the_route)
if (shop_to_go and first_interface_in_the_route)
=> [6DPos robot_position, 6DPos human_position, Array places_to_point]
with:
places_to_point[0]:
how to point first_interface_in_the_route if possible or empty
places_to_point[1]:
if places_to_point[0]!=void and cost(TBD)<threshold
how to point shop_to_go if possible or empty
if (shop_to_go or first_interface_in_the_route)
=> [6DPos robot_position, 6DPos human_position, Array places_to_point]
with:
places_to_point[0]:
how to point (shop_to_go or first_interface_in_the_route) if possible or empty
parameters:
visibility threshold: defines from which size we consider an object visible in the viewpoint
max distance: defines the maximum distance the robot can cover from its starting point
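A sketch of the places_to_point contract described above; how_to_point() and cost() are placeholders for the pointing planner internals, and the threshold value is arbitrary:

```python
# Sketch of the places_to_point contract described above; how_to_point()
# and cost() are placeholders for the pointing planner internals.
def build_places_to_point(shop_to_go=None, first_interface=None,
                          cost_threshold=1.0):  # threshold value is arbitrary
    places_to_point = []
    if shop_to_go and first_interface:
        p0 = how_to_point(first_interface)  # empty if it cannot be pointed at
        places_to_point.append(p0)
        if p0 and cost(p0) < cost_threshold:
            places_to_point.append(how_to_point(shop_to_go))
    elif shop_to_go or first_interface:
        places_to_point.append(how_to_point(shop_to_go or first_interface))
    return places_to_point
```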
Compute how far the actual position of the human is from the human_position given by the pointing planner.
If it is less than a given threshold, we consider that he should not move.
If it is higher, we consider that the human should move, and we call PointAtHumanFuturePlace.
INPUTS: [6DPos actual_human_position, 6DPos target_human_position]
OUTPUTS: [Boolean YES/NO]
parameters:
threshold
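A minimal sketch of this check, assuming the 6D poses are reduced to planar (x, y) coordinates and an arbitrary threshold in meters:

```python
import math

# Decide whether the human has to move, comparing planar positions;
# the threshold (in meters) is arbitrary and should be tuned.
def human_has_to_move(actual_human_position, target_human_position,
                      threshold=0.5):
    dx = target_human_position[0] - actual_human_position[0]
    dy = target_human_position[1] - actual_human_position[1]
    return math.hypot(dx, dy) > threshold  # True -> call PointAtHumanFuturePlace
```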
Pending questions: There are some special cases that should be handled:
- what if the `target_human_position` is the `actual_robot_position`? -> not yet done, we can think to operate differently in that case, making the robot move before the human... => for now, the robot asks the human to go to its place
- "I need you to make a few steps, you will see better, what i am about to show you. Can you go there ?"
- Pointing
`PointAt(6DPos target_human_position)`, plus we follow the eye-gaze of the human (need to check what is really done); if we lose it, the robot looks at `6DPos target_human_position`
If the human reaches the `6DPos target_human_position` -> SUCCESS -> the human is well placed
Else
`PointAt(6DPos target_human_position)` (+ add a temporisation?)
- add "have you seen it? do you see it?.."
Pending questions: Does the robot need to turn to point? Is there any case where it is possible? Or is it not possible?
Test to do: if the human does not reach the position, test that the system does answer differently: the robot waits and then does the task.
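A sketch of this behaviour, with point_at(), human_reached() and say() as hypothetical primitives and an arbitrary timeout standing for the temporisation:

```python
import time

# Sketch of PointAtHumanFuturePlace; point_at(), human_reached() and say()
# are hypothetical primitives, the timeout stands for the temporisation.
def point_at_human_future_place(target_human_position, timeout=15.0):
    say("I need you to make a few steps, you will see better "
        "what I am about to show you. Can you go there?")
    point_at(target_human_position)
    deadline = time.time() + timeout
    while time.time() < deadline:
        if human_reached(target_human_position):
            return True  # SUCCESS: the human is well placed
        time.sleep(0.5)
    point_at(target_human_position)  # point again after the temporisation
    return False
```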
The robot moves to the position given by the pointing planner
MoveToPos
INPUTS: 6DPos target_robot_position
OUTPUTS:
SUCCESS -> we continue
FAILED -> we stop
Test to do: do a case where the robot has to move more
TODO: 2) recompute the position of the robot after the human has moved (we need a parameter to be able to say that the human should not move anymore). Two cases: either the human has to move to a "correct" position, or it is a completely wrong one.
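A sketch of MoveToPos, under the assumption that the robot's navigation is exposed through a standard move_base action server (an assumption about the setup, not something stated here):

```python
import rospy
import actionlib
from actionlib_msgs.msg import GoalStatus
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

# Assumption: the navigation stack exposes a move_base action server.
def move_to_pos(target_robot_position):  # geometry_msgs/PoseStamped
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()
    goal = MoveBaseGoal()
    goal.target_pose = target_robot_position
    client.send_goal(goal)
    client.wait_for_result()
    # SUCCESS -> we continue, FAILED -> we stop
    return client.get_state() == GoalStatus.SUCCEEDED
```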
At this step, both the robot and the human are well placed in the environment to continue the interaction. We will now select the landmark that will be pointed at.
SelectLandmark
INPUTS:
OUTPUTS:
This function will point in the direction of a point that is not visible. The robot does not turn its head in this case.
PointNotVisible
INPUTS: 6DPos target_position
OUTPUTS:
if ((isVisible Shop) and (Shop is in the actual region))
"look the ShopName is here" plus the robot looks where it will point
else
if ((isVisible Shop) and (Shop is not in the actual region))
"look to go to ShopName there,"
if (there is an interface)
"you need to go through the InterfaceName here"
LookAtLandmark
INPUTS:
OUTPUTS:
We point at the shop or the interface. We look at the human and ask if he has seen it. If not, we ask if he wants us to point at it another time (go back to LookAt).
PointAtLandmark
INPUTS: 6DPos landmark_position
OUTPUTS:
Test to do: check that the system reacts if the robot detects that the human has seen it.
Pending requests: what if the human says "oh yes, I've seen it!" or "it's ok" before the robot asks? What if the human says "no, I do not see it!" whereas the robot has detected that he has seen it?
TODO:
- Check if the robot needs to turn to do the pointing and how to do that (concurrent access between point_at and look_at)
Given the visibility of the shop, what will be done is the following:
if (notVisible shop) and (shop is not in the actual_region)
pointNotVisible(shop)
lookAtLandmark(first interface)
pointAtLandmark(first interface)
if (Visible shop) and (shop is in the actual_region)
lookAtLandmark(shop)
pointAtLandmark(shop)
if (Visible shop) and (shop is not in the actual_region)
lookAtLandmark(shop)
pointAt(shop)
lookAtLandmark(first interface)
pointAt(first interface)
There are 4 possibilities for pointing at the supervision level:
- pointAt an invisible object (we do not look at it)
- pointAt an interface
- pointAt a shop (or a landmark attached to a shop?)
- ask the human to move and then point
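A sketch of this dispatch, reusing the hypothetical helpers from the previous sketches:

```python
# Sketch of the dispatch above, reusing the hypothetical helpers of the
# previous sketches (is_visible, region_of, point_not_visible,
# look_at_landmark, point_at_landmark).
def show_route_start(shop, first_interface, actual_region):
    if not is_visible(shop) and region_of(shop) != actual_region:
        point_not_visible(shop)
        look_at_landmark(first_interface)
        point_at_landmark(first_interface)
    elif is_visible(shop) and region_of(shop) == actual_region:
        look_at_landmark(shop)
        point_at_landmark(shop)
    elif is_visible(shop):  # visible but in another region
        look_at_landmark(shop)
        point_at_landmark(shop)
        look_at_landmark(first_interface)
        point_at_landmark(first_interface)
```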
This function will give a more detailed route based on places.
$ rosservice call /semantic_route_description/get_route "{from_: 'robot_infodesk', to: 'burger_king', persona: 'lambda'}"
routes:
- route: [robot_infodesk, os_exp_1, gf_ww1_os1_intersection, gf_walkway_1, door_C, outside,
ff_door_1, ff_corridor_4, burger_king]
- route: [robot_infodesk, os_exp_1, gf_ww2_os1_intersection, gf_walkway_2, door_D, outside,
ff_door_1, ff_corridor_4, burger_king]
- route: [robot_infodesk, os_exp_1, door_E, outside, ff_door_1, ff_corridor_4, burger_king]
- route: [robot_infodesk, os_exp_1, door_h20, os_hall, elevator_1, ff_corridor_3, ff_c34_intersection,
ff_corridor_4, burger_king]
- route: [robot_infodesk, os_exp_1, door_h20, os_hall, stairs_1, ff_corridor_3, ff_c34_intersection,
ff_corridor_4, burger_king]
costs: [12.96, 12.96, 10.08, 7.5, 10.8]
goals: [burger_king, burger_king, burger_king, burger_king, burger_king]
Get the region where the shop (the place) is, and then get all the shops for a region.
Perspective taking saliency
- one case where only the robot has to move (it is the pointing planner that is able to compute that)
- one case where the human moves a lot and the robot waits
- two test cases where the robot shows the human where he has to go:
  - the human is in the robot's viewpoint
  - the human position is not in the robot's viewpoint (so the robot turns to see if the human has reached its position)
- for the same landmark, show that the answer of the system is different and adaptable
- pointing planner:
  - check the pointing planner parameters and test parameter changes
  - perhaps we should tune the pointing planner to get more distant positions for the robot and for the human
- the perspective taking would be difficult to use to detect whether the human has seen or not; check when it works and when it does not, and think about how we then use this information
- pointing "gesture" of the robot:
  - not very good for now
  - check if the pointing position would be better if we show a position higher than the one we give
  - check pointing at the floor
- navigation:
  - test the wheels
  - sometimes the robot stops, or is slow to find a solution, or does not find a position: test with the motion capture system to know whether it is the perception that blocks the system or a real defect of the software
- supervision:
  - it would be good to call back the pointing planner when the human is placed in the environment (to catch if the human is not exactly where we told him to go)
  - the idea is to check that we are in a correct configuration when we have to do something (before pointing, for example)
- Vision:
  - test the position we have from the vision system and the position we can have from the motion capture
  - do a test to show how the system works when the robot moves and when the robot's head moves
For each test, check from several starting positions.
- Test 1: the robot shows a corridor
- Test 2: the robot shows something that is far away but that can be seen through a window
- Test 3: the goal is in my field of view but I do not see it for now (e.g. Burger King)
- Test 4: the goal is behind me