Project Suggestions


Classifying fixed and movable obstacles

  • Some obstacles can move while others are fixed. A smart navigation system should be able to differentiate between the two, enabling the robot to respond correctly.
  • An example of a moving obstacle is a person walking; in that case the robot can simply wait for the path to clear.
  • Fixed obstacles include walls, shelves and large furniture, around which the robot must replan its path.
  • If the robot can recognise movable obstacles, like chairs or toys, it can try to move them or request a person to move them. A minimal decision sketch follows this list.
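
As a rough illustration of how the classification could drive behaviour, the sketch below maps an obstacle class (assumed to come from some upstream detector, which is not shown) to one of the responses above. The class names and the respond_to_obstacle helper are hypothetical, not part of any required design.

    # Hypothetical sketch: map an obstacle class (from some upstream detector,
    # not shown here) to a navigation behaviour.
    from enum import Enum

    class ObstacleClass(Enum):
        MOVING = "moving"    # e.g. a walking person
        FIXED = "fixed"      # e.g. a wall or shelf
        MOVABLE = "movable"  # e.g. a chair or toy

    def respond_to_obstacle(obstacle_class):
        """Pick a behaviour for a detected obstacle (illustrative only)."""
        if obstacle_class is ObstacleClass.MOVING:
            return "wait"         # the obstacle will likely clear on its own
        if obstacle_class is ObstacleClass.FIXED:
            return "replan_path"  # route around it via the global planner
        if obstacle_class is ObstacleClass.MOVABLE:
            return "move_or_ask"  # push it aside or ask a person for help
        return "stop"             # unknown class: stop and reassess

    # Example: a chair detected in the corridor.
    print(respond_to_obstacle(ObstacleClass.MOVABLE))  # -> move_or_ask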

Obstacle avoidance

  • Improve navigation around the immediate area to avoid obstacles that may be hard to detect in a precomputed global map. These obstacles are usually dynamic rather than permanent fixtures of the environment, so they should not be included in the map.
  • In the case of a domestic robot, you want to ensure safe navigation to avoid breaking things or injuring people.
  • These obstacles may include people unexpectedly walking in front of the robot, small objects below the lidar plane, or large surfaces such as a dining table or TV above it.
  • The robot should navigate safely around these obstacles, stopping where necessary; a minimal reactive sketch is given below.
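
A minimal reactive layer might look like the Python/ROS sketch below, which stops or slows the robot when the laser scan reports something close. The /scan and /cmd_vel topic names, speeds and distance thresholds are assumptions and would need tuning for a real robot.

    #!/usr/bin/env python
    # Reactive stop-and-slow sketch for a ROS robot. The topic names,
    # speeds and thresholds are assumptions for illustration only.
    import math
    import rospy
    from sensor_msgs.msg import LaserScan
    from geometry_msgs.msg import Twist

    STOP_DISTANCE = 0.4  # metres: stop if anything is closer than this
    SLOW_DISTANCE = 1.0  # metres: slow down inside this range

    cmd_pub = None

    def scan_callback(scan):
        # Drop invalid readings (inf/NaN) before taking the closest range.
        valid = [r for r in scan.ranges
                 if not math.isinf(r) and not math.isnan(r)]
        closest = min(valid) if valid else float('inf')

        cmd = Twist()
        if closest < STOP_DISTANCE:
            cmd.linear.x = 0.0   # too close: stop
        elif closest < SLOW_DISTANCE:
            cmd.linear.x = 0.1   # creep forward carefully
        else:
            cmd.linear.x = 0.3   # path is clear: normal speed
        cmd_pub.publish(cmd)

    if __name__ == '__main__':
        rospy.init_node('simple_obstacle_avoider')
        cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('/scan', LaserScan, scan_callback)
        rospy.spin()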

Learning to recognise a person

  • In many cases, a domestic robot will deal with several people. The robot can provide personalised service if it can identify individuals.
  • Dealing with unknown people is helpful when the robot is placed in an unfamiliar environment. For example, when a robot is tasked with greeting Claude at reception, it can go to that room and infer or ask who Claude is.
  • Another scenario is a robot serving drinks at a house party: multiple people can request drinks, and the robot must fetch them and find the correct person to hand each one to.
  • In a crowd, the robot must be able to isolate individuals. A sketch of one possible recognition approach follows this list.
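
One way to approach recognition is with an off-the-shelf face-embedding library. The sketch below uses the open-source face_recognition package to enrol a person from a reference photo and look for them in a camera frame; the file names and the single-reference-photo setup are illustrative assumptions.

    # Illustrative sketch using the open-source face_recognition library.
    # The image file names and single reference photo are placeholders.
    import face_recognition

    # Enrol a known person from a reference photo.
    known_image = face_recognition.load_image_file("claude.jpg")
    known_encoding = face_recognition.face_encodings(known_image)[0]

    # Later, analyse a camera frame (here loaded from disk for simplicity).
    frame = face_recognition.load_image_file("camera_frame.jpg")
    locations = face_recognition.face_locations(frame)
    encodings = face_recognition.face_encodings(frame, locations)

    for box, encoding in zip(locations, encodings):
        if face_recognition.compare_faces([known_encoding], encoding)[0]:
            print("Found Claude at (top, right, bottom, left):", box)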

Person Following

  • The tracking (vision) needs to locate the correct moving person in a crowd, even if they are momentarily obscured from view. The added challenge is that the robot (and therefore camera) itself is non-stationary.
  • The driving needs to keep close to the person being tracked whilst avoiding smaller obstacles and coping with the person turning a corner. A simple following-controller sketch is given below.
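
The driving side can start as a simple proportional controller, as in the hedged sketch below. It assumes some tracker (not shown) reports the person's horizontal offset in the image and an estimated distance, and that cmd_pub is a ROS publisher for velocity commands; the gains and target distance are placeholder values.

    # Proportional person-following sketch. A tracker (not shown) supplies the
    # person's horizontal image offset in [-1, 1] and an estimated distance in
    # metres; gains and the target distance are placeholder values.
    from geometry_msgs.msg import Twist

    TARGET_DISTANCE = 1.0  # metres to keep behind the person
    ANGULAR_GAIN = 1.5     # turn rate per unit of image offset
    LINEAR_GAIN = 0.5      # forward speed per metre of distance error

    def follow_step(cmd_pub, offset, distance, person_visible):
        cmd = Twist()
        if person_visible:
            cmd.angular.z = -ANGULAR_GAIN * offset  # turn toward the person
            forward = LINEAR_GAIN * (distance - TARGET_DISTANCE)
            cmd.linear.x = max(0.0, min(forward, 0.5))  # clamp forward speed
        # If the person is momentarily occluded, publish zero velocity and let
        # the tracker try to re-acquire them rather than driving blindly.
        cmd_pub.publish(cmd)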

Gesture recognition

  • Gestures and body language add a lot of information in human-robot interaction.
  • Develop a vision system that lets a person point to objects, wave for the robot to come over, or offer a handshake; a small pointing-detection sketch follows.
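
As a starting point, per-frame hand landmarks can be turned into crude gesture cues. The sketch below uses MediaPipe Hands to estimate a rough pointing direction; the extended-finger heuristic is an illustrative assumption rather than a robust gesture classifier.

    # Sketch of a crude pointing detector built on MediaPipe Hands. The
    # "index finger extended" heuristic is an illustrative assumption only.
    import cv2
    import mediapipe as mp

    mp_hands = mp.solutions.hands

    def detect_pointing(bgr_frame):
        with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
            results = hands.process(cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB))
            if not results.multi_hand_landmarks:
                return None
            lm = results.multi_hand_landmarks[0].landmark
            tip = lm[mp_hands.HandLandmark.INDEX_FINGER_TIP]
            pip = lm[mp_hands.HandLandmark.INDEX_FINGER_PIP]
            wrist = lm[mp_hands.HandLandmark.WRIST]
            # Crude check: the index fingertip is further from the wrist than
            # its middle joint, i.e. the finger is extended.
            if abs(tip.x - wrist.x) > abs(pip.x - wrist.x):
                return (tip.x - wrist.x, tip.y - wrist.y)  # rough direction
            return None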

VR/AR visualisation

  • Part of improving human-robot interaction is being able to understand what the robot is seeing.
  • Visualising this in the real world through an AR or VR headset, with integrated controls, can help you control, diagnose or even teach the robot new things.

Background noise filtering

  • Speech is one of the main sources of input to a robot and the most natural way for us to communicate.
  • Therefore, it is important that the robot is able to isolate a person talking to it, remove background noise, and even locate (localise) that person.
  • Techniques from signal processing can be used, so a background in that field is recommended; a simple filtering sketch follows.
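
As one very simple example of such a technique, the sketch below band-pass filters a microphone signal to the typical speech band with SciPy. The sample rate and band edges are assumptions; a real project would likely combine this with beamforming or spectral subtraction.

    # Band-pass filter a microphone signal to the typical speech band.
    # Sample rate and band edges are assumptions for illustration.
    import numpy as np
    from scipy.signal import butter, sosfilt

    def filter_speech_band(audio, sample_rate=16000):
        """Keep roughly 300-3400 Hz, where most speech energy lies."""
        sos = butter(6, [300, 3400], btype='bandpass',
                     fs=sample_rate, output='sos')
        return sosfilt(sos, audio)

    # Example: a 1 kHz tone (kept) mixed with 50 Hz mains hum (removed).
    t = np.linspace(0, 1, 16000, endpoint=False)
    noisy = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 50 * t)
    clean = filter_speech_band(noisy)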
