Self-funded PhD opportunities

Multi-modal Intelligent Sensing and Recognition for Human-Robot Interaction/Collaboration

  • Application end date: Saturday 1 July 2017
  • Funding Availability: Self-funded PhD students only
  • Department: School of Computing
  • PhD Supervisors: Zhaojie Ju, Chenguang Yang (Swansea University) and Mohammed Bader

In human-robot interaction/collaboration, the robot is expected to detect, perceive and understand human motions in the environment so that it can interact, cooperate, imitate or learn in an intelligent manner. Sensory information about both human motions and the environment is captured by various types of sensors, such as cameras, markers, accelerometers and tactile sensors [1]. Research applications of human motion analysis in human-robot interaction/collaboration include programming by demonstration, imitation, tele-operation, activity or context recognition and humanoid design [2]. In addition, the extraction of meaningful information about the environment through perceptual systems plays a key role in scene representation and recognition, which further enables the robot to interact with humans in a more natural way [7]. The aim of scene representation for HRI is to describe the way in which humans and robots tend to interact around a scene and to generate a representation tied to geography, indicating which types of motion might happen in which part of the scene. Such a representation enables a robot to respond efficiently to user commands that refer to spatial locations, object features or object labels, without re-performing a visual search each time. The objectives of this project are:

1. To develop a multimodal sensing platform for human-robot interaction and collaboration, using various types of sensors, such as depth cameras, markers, accelerometers, tactile sensors, force sensors and bio-signal sensors, to capture both human motions and the operating environment.

2. To investigate a more robust and less noisy representation of human action features, including local and global features, that incorporates a variety of uncertainties, e.g., image quality, individual action habits and differing environments.

3. To investigate an advanced motion analysis framework, including hierarchical data fusion strategies and off-the-shelf probabilistic recognition algorithms, to synchronise and fuse the sensory information for real-time analysis and automatic recognition of human actions with satisfactory accuracy and reliable fusion results (a minimal recognition sketch is given after this list). Priority is given to balancing the effectiveness and efficiency of the system.

4. To investigate effective methods for scene representation using dynamic neural fields, including transient detectors, temporal variation models, etc. (a sketch of a basic field model also follows this list). The scene representation will be incorporated into the motion analysis framework to achieve a more effective and stable system.
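As a minimal sketch of the kind of pipeline objective 3 describes, the code below fuses synchronised per-frame features from several modalities by simple concatenation and classifies action windows with one off-the-shelf Gaussian mixture model per action class (scikit-learn's GaussianMixture). The modality names, feature dimensions and the use of standard rather than fuzzy GMMs [1] are illustrative assumptions, not the project's prescribed method.

# Minimal sketch: feature-level fusion + per-class Gaussian mixture
# classification. Sensor names, dimensions and the use of standard
# (non-fuzzy) GMMs are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

def fuse(camera_feats, accel_feats, tactile_feats):
    """Concatenate synchronised per-frame features from each modality
    (each argument is a frames x features array for one sensor)."""
    return np.hstack([camera_feats, accel_feats, tactile_feats])

def train_action_models(windows_by_action, n_components=3):
    """Fit one GMM per action class on fused feature windows."""
    models = {}
    for action, windows in windows_by_action.items():
        X = np.vstack(windows)  # all frames of this class, frames x fused-dim
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag", random_state=0)
        models[action] = gmm.fit(X)
    return models

def recognise(models, window):
    """Label a window with the class whose GMM gives the highest
    average per-frame log-likelihood."""
    scores = {action: gmm.score(window) for action, gmm in models.items()}
    return max(scores, key=scores.get)

Feature-level concatenation is only the simplest fusion strategy; the hierarchical fusion named in objective 3 would combine modality-specific models at a later stage, but the classification step would look much the same.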
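For objective 4, the following sketch simulates a one-dimensional Amari-type dynamic neural field with Euler integration and a difference-of-Gaussians interaction kernel; all parameter values are illustrative assumptions. A transient localised input (an object cue) can leave behind a self-sustained activity peak, which is the mechanism that lets a scene representation answer spatial queries without re-performing a visual search.

# Minimal sketch of a 1-D Amari-type dynamic neural field; parameters
# are illustrative, not tuned for the project.
import numpy as np

N, dx, dt, tau, h = 200, 1.0, 1.0, 10.0, -2.0  # grid size, steps, resting level
x = np.arange(N) * dx
d = np.minimum(x, N * dx - x)  # circular distance from the origin

def gauss(sigma):
    return np.exp(-d ** 2 / (2 * sigma ** 2))

w = 3.0 * gauss(4.0) - 1.5 * gauss(12.0)  # local excitation, broader inhibition
f = lambda u: 1.0 / (1.0 + np.exp(-u))    # sigmoid output nonlinearity
w_hat = np.fft.fft(w)

u = np.full(N, h)  # field starts at the resting level
stimulus = 4.0 * np.exp(-(x - 60.0) ** 2 / (2 * 3.0 ** 2))  # localised object cue

for t in range(600):
    s = stimulus if t < 200 else 0.0  # cue removed after 200 steps
    # lateral interaction: circular convolution of the kernel with f(u)
    lateral = np.real(np.fft.ifft(w_hat * np.fft.fft(f(u)))) * dx
    u += dt / tau * (-u + h + s + lateral)

# With these parameters a self-sustained bump persists near x = 60 after
# the cue is gone, acting as a working memory of where the object was.
print("peak position:", x[np.argmax(u)])

The transient detectors and temporal variation models named in objective 4 would sit on top of such a field, feeding it inputs derived from the motion analysis framework.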

How to apply:

To apply or make an enquiry, please visit the postgraduate research pages for Computing and Creative Technologies.

Applications should use our standard application forms and follow the instructions given under the ‘Research Degrees’ heading on the following webpage:

http://www.port.ac.uk/application-fees-and-funding/applying-postgraduate/#rd

When applying, please quote the project code CCTS3390217.

Funding Notes:

Home/EU applicants only. Please use the online application form and state the project code and studentship title in the personal statement section.

An appropriate first or upper second class honours degree from a United Kingdom university, or a recognised equivalent non-UK degree of the same standard, in a relevant subject; or a master’s degree in an appropriate subject. Exceptionally, equivalent professional experience and/or qualifications will be considered.

References to recently published articles:

1. Ju, Z. and Liu, H., “Fuzzy Gaussian Mixture Models”, Pattern Recognition, 45(3):1146-1158, 2012.

2. Ju, Z. and Liu, H., “Human Hand Motion Analysis With Multisensory Information”, IEEE/ASME Transactions on Mechatronics, 19(2):456-466, 2014.