introduction_wiki
Rodrigo Serra edited this page Nov 28, 2023
Our robot is under constant development; however, as of 14/02/2018, it has the following skills:
- pick: trigger perception (object recognition) and grasp the desired object with the manipulator
- recognize speech: convert incoming audio (from the microphone) into text
- make faces: it can display happy, sad, angry, and other expressions
- speech synthesis: it can speak sentences aloud using a synthesized robot voice
- answer questions: it has an internal dictionary of questions it can answer
- follow people: it triggers a people-detection algorithm, performs tracking, and calls navigation to follow a person
- detect people: if a person is in front of the robot, it can detect the person's pose and output a boolean indicating whether someone is in front of it
- object recognition: 2D-based and 3D-based; objects can be trained in advance and recognized by the robot, along with their pose relative to the robot's body
- navigate between waypoints autonomously
- relative base movements: move forward, backward, or sideways, and rotate left or right by a fixed distance/angle set as a parameter
- distinguish between big and small objects: by counting the number of points in the object's point cloud and comparing it against a given threshold
- move neck: a motor lets it rotate its head by a given angle to the left or right
- natural language understanding: you can speak to the robot in a natural way and it can still identify your intention
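The big/small object detection above reduces to a simple threshold on point-cloud size. As a minimal sketch (the function name and threshold value are hypothetical, not taken from the robot's actual code):

```python
def classify_object_size(num_points, threshold=500):
    """Classify an object as 'big' or 'small' by the number of points
    in its point cloud. The threshold is a hypothetical example value;
    in practice it would be tuned for the sensor and object distance."""
    return "big" if num_points >= threshold else "small"


# Example usage with hypothetical point counts:
print(classify_object_size(2000))  # a dense cloud -> "big"
print(classify_object_size(120))   # a sparse cloud -> "small"
```

A single fixed threshold is sensitive to distance (farther objects yield fewer points), so a real implementation would likely normalize by range or use the cloud's bounding-box size instead.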