Integrated AI framework for grasp and manipulation of intelligent robots
Fully funded (UK/EU/International students)
School of Computing
23 February 2020
Candidates applying for this project may be eligible to compete for one of a small number of bursaries available; these cover tuition fees at the UK/EU rate for three years and a stipend in line with the UKRI rate (£15,009 for 2019/2020). Bursary recipients may be eligible for £1,500 p.a. for project costs/consumables.
The work on this project will involve:
- Deep Learning (CNN-based) for object recognition and affordance detection
- Object modelling and 3D point cloud segmentation
- Semantic knowledge modelling
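To give a flavour of one of these ingredients, the sketch below shows a minimal Euclidean-clustering approach to 3D point cloud segmentation, where points connected by chains of close neighbours are grouped into object candidates. It assumes the cloud is a plain list of (x, y, z) tuples; the function name and the `radius` parameter are illustrative and not part of the project specification.

```python
import math

def euclidean_cluster(points, radius):
    """Group 3D points into clusters: two points belong to the same
    cluster if they are connected by a chain of neighbours, each pair
    no more than `radius` apart."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            p = frontier.pop()
            # Collect all still-unvisited points within `radius` of p.
            neighbours = [q for q in unvisited
                          if math.dist(points[p], points[q]) <= radius]
            for q in neighbours:
                unvisited.remove(q)
            cluster.extend(neighbours)
            frontier.extend(neighbours)
        clusters.append(cluster)
    return clusters

# Two well-separated blobs segment into two clusters.
cloud = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.1, 0.0),
         (5.0, 5.0, 5.0), (5.1, 5.0, 5.0)]
print(len(euclidean_cluster(cloud, radius=0.5)))  # → 2
```

In practice, learned segmentation networks operating on point clouds replace this geometric heuristic, but the clustering view illustrates what "segmenting" a cloud into object hypotheses means.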
The project pioneers an AI framework that integrates robot learning, scene understanding, task description and knowledge modelling for intelligent robot manipulation. It aligns with our research in the EU Horizon 2020 project (NEANIAS 863448), which integrates robots into the European Open Science Cloud for planetary and underwater services, but it pushes the research frontier towards intelligent manipulation – one of the main challenges of intelligent robot research.
The research outcomes from this project would deepen our understanding of human perceptual-motor mechanisms and bring substantial research opportunities in the coming years in forging the emerging technology of immersive intelligent robots in human life and society. The work will be undertaken in collaboration with Aix-Marseille University, University of Bremen, University of Milan-Bicocca, National and Kapodistrian University of Athens and other European partner institutions.
Intelligent manipulation is the problem of devising strategies and plans for robots to interact with objects in given task contexts. Such robots need to be task and environment aware and able to devise optimal manipulation policies in situ. Research has shown that intelligent manipulation is a complex AI problem involving robot learning, knowledge acquisition and modelling, object recognition, scene understanding and more.
Recent research has seen progress in isolated areas such as affordance learning, grasp planning and manipulation-knowledge modelling. However, there is still no breakthrough in developing a unified AI model that represents the complex nature of the problem and reflects the confluence of these various mechanisms.
This project will study the intrinsic mechanisms and knowledge models underpinning intelligent manipulation, and aims to establish a framework that integrates robot learning, object modelling, task descriptions and policy generation within a unified model of manipulation planning. The model will address task- and context-aware policy making through multimodal learning, knowledge sharing and knowledge-driven task learning.
You'll need an upper second class honours degree from an internationally recognised university or a Master's degree in an appropriate subject. In exceptional cases, we may consider equivalent professional experience and/or qualifications. You'll also need English language proficiency at a minimum of IELTS band 6.5, with no component score below 6.0.
Ideally, you should have a degree in the disciplines of artificial intelligence, computing, information technology or engineering. Experience in deep learning, computer vision or semantic modelling would be advantageous.
How to apply
We encourage you to contact Dr Jiacheng Tan at email@example.com to discuss your interest before you apply, quoting the project code.
When you are ready to apply, you can use our online application form. Make sure you submit a personal statement, proof of your degrees and grades, details of two referees, proof of your English language proficiency and an up-to-date CV. An extended statement outlining how you would approach the project would be welcomed.
Our ‘How to Apply’ page offers further guidance on the PhD application process.
If you want to be considered for this funded PhD opportunity you must quote project code COMP4550220 when applying.