Funding

Self-funded PhD students only

Project code

CCTS4480219

School

School of Computing

Start dates

October 2019 and February 2020

Closing date

Applications accepted all year round

Applications are invited for a self-funded, 3-year full-time or 6-year part-time PhD project, to commence in October 2019 or February 2020.

The PhD will be based in the School of Computing and will be supervised by Dr Zhaojie Ju and Dr Chenguang Yang (Swansea University).

In human-robot interaction and collaboration, the robot must be able to detect, perceive and understand the corresponding human motions in its environment in order to interact, cooperate, imitate or learn in an intelligent manner.

Sensory information about both human motions and the environment is captured by various types of sensors, such as cameras, markers, accelerometers and tactile sensors. Research applications of human motion analysis in human-robot interaction/collaboration include programming by demonstration, imitation, tele-operation, activity or context recognition and humanoid design. In addition, extracting meaningful information about the environment through perceptual systems plays a key role in scene representation and recognition, further enabling the robot to interact with humans in a more natural way.

The aim of scene representation for HRI is to describe the way in which humans and robots tend to interact around a scene and to generate a representation tied to geography, indicating which types of motion might happen in which part of the scene. It enables a robot to respond efficiently to user commands that refer to spatial locations, object features or object labels without re-performing a visual search each time.

We will investigate effective methods for scene representation using dynamic neural fields, including transient detectors and temporal variation models. The scene representation will be incorporated into the motion analysis framework to achieve a more effective and stable system.
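
For illustration only, a minimal sketch of a one-dimensional Amari-style dynamic neural field is given below. The lattice size, time constant and kernel widths are hypothetical example values and are not taken from the project specification.

import numpy as np

# Minimal 1-D Amari-style dynamic neural field (illustrative sketch only).
# All parameter values (tau, h, kernel widths) are hypothetical examples.
def gaussian(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2))

n = 200                       # number of field positions
x = np.linspace(-10, 10, n)
dx = x[1] - x[0]
u = np.full(n, -1.0)          # field activation, starts at resting level
tau, h = 10.0, -1.0           # time constant and resting level

# "Mexican hat" interaction kernel: local excitation, surround inhibition
kernel = 1.5 * gaussian(x, 1.0) - 0.5 * gaussian(x, 3.0)

def step(u, stimulus, dt=1.0):
    rate = 1.0 / (1.0 + np.exp(-u))                  # sigmoidal firing rate
    lateral = np.convolve(rate, kernel, mode='same') * dx
    du = (-u + h + lateral + stimulus) / tau
    return u + dt * du

# Localised input, e.g. a detected object or motion cue at position x = 2
stimulus = 3.0 * gaussian(x - 2.0, 0.8)
for _ in range(200):
    u = step(u, stimulus)

print("peak location:", x[np.argmax(u)])             # activation peaks near x = 2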


The objectives of the project are:

  • To develop a multimodal-sensing platform for human-robot interaction and collaboration, using various types of sensors such as depth cameras, markers, accelerometers, tactile sensors, force sensors and bio-signal sensors, to capture both human motions and the operating environment.
  • To investigate a more robust and less noisy representation of human action features, including local and global features, accounting for a variety of uncertainties such as image quality, individual action habits and differing environments.
  • To investigate an advanced motion analysis framework, including hierarchical data fusion strategies and off-the-shelf probabilistic recognition algorithms, to synchronise and fuse the sensory information for real-time analysis and automatic recognition of human actions with satisfactory accuracy and reliable fusion results. Priority is given to balancing the effectiveness and efficiency of the system (a minimal illustrative fusion sketch follows this list).
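
As a rough illustration of feature-level fusion followed by probabilistic recognition, the sketch below concatenates features from two hypothetical sensor streams (camera-derived and accelerometer-derived) and classifies actions with a minimal Gaussian naive Bayes model standing in for an off-the-shelf probabilistic recogniser. All data, feature counts and class labels here are synthetic examples, not project results.

import numpy as np

# Illustrative sketch only: early (feature-level) fusion of two hypothetical
# sensor streams, then a minimal Gaussian naive Bayes action classifier.
rng = np.random.default_rng(0)

def synthetic_trials(mean_cam, mean_acc, n=50):
    cam = rng.normal(mean_cam, 0.5, size=(n, 3))    # 3 camera-derived features
    acc = rng.normal(mean_acc, 0.5, size=(n, 2))    # 2 accelerometer features
    return np.hstack([cam, acc])                    # feature-level fusion

X = np.vstack([synthetic_trials(0.0, 0.0), synthetic_trials(2.0, 1.5)])
y = np.array([0] * 50 + [1] * 50)                   # two action classes

# Fit per-class Gaussians (naive Bayes with independent features)
classes = np.unique(y)
means = np.array([X[y == c].mean(axis=0) for c in classes])
vars_ = np.array([X[y == c].var(axis=0) + 1e-6 for c in classes])
priors = np.array([(y == c).mean() for c in classes])

def predict(sample):
    log_lik = -0.5 * (np.log(2 * np.pi * vars_) + (sample - means) ** 2 / vars_)
    return classes[np.argmax(log_lik.sum(axis=1) + np.log(priors))]

test = synthetic_trials(2.0, 1.5, n=5)
print([predict(s) for s in test])                   # expected: mostly class 1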

Funding 

PhD full-time and part-time courses are eligible for the Government Doctoral Loan (UK and EU students only).


2019/2020 entry

Home/EU/CI full-time students: £4,327 p/a*

Home/EU/CI part-time students: £2,164 p/a*

International full-time students: £15,900 p/a*

International part-time students: £7,950 p/a*

*Fees are subject to annual increase

 

PhD by Publication fees 2019/2020

Members of staff: £1,610 p/a*

External candidates: £4,327 p/a*

*Fees are subject to annual increase

How to apply

We’d encourage you to contact Dr Zhaojie Ju (Zhaojie.ju@port.ac.uk) to discuss your interest before you apply, quoting the project code CCTS4480219.

When you're ready to apply, you can use our online application form and select ‘Computing and Creative Technologies’ as the subject area. Make sure you submit a personal statement, proof of your degrees and grades, details of two referees, proof of your English language proficiency and an up-to-date CV.  

Our How to Apply page also offers further guidance on the PhD application process.
