There is a growing demand for technology that captures and analyses facial expression, driven by the computer games, entertainment, and security industries, and augmented and virtual reality applications. We're researching new technology to improve the efficiency and reduce the cost of generating realistic facial images. Our work is helping health professionals improve how they monitor treatment of patients with Parkinson's disease and facial palsy, and helping people with autism improve their communication skills.
Our work has been funded by Innovate UK, Emteq, the Engineering and Physical Sciences Research Council (EPSRC), and the Royal Academy of Engineering. It helps machines understand 2D images and use them to reconstruct 3D scenes. Robots use this information to avoid obstacles and perform tasks without getting stuck, for example when assisting people with limited mobility in healthcare settings.
Our research is helping computers, mobile technology and cameras understand and visualise human movement and emotion. We analyse body and facial movements using electromyography, whereby the electrical activity of muscle tissue is monitored using electrodes and represented on a screen or through sound. We also analyse cognitive signals by using electroencephalography, which records electrical brain activity with electrodes placed on the scalp.
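As an illustration of how raw EMG recordings are turned into a signal that can be displayed on screen, the sketch below extracts an amplitude envelope by rectifying the trace and smoothing it with a moving average. This is a minimal, generic example on synthetic data, not the group's own processing pipeline; the function name and parameters are hypothetical.

```python
import numpy as np

def emg_envelope(signal, fs, window_ms=100):
    """Estimate the amplitude envelope of a raw EMG trace by
    full-wave rectification followed by a moving-average filter."""
    rectified = np.abs(signal - np.mean(signal))  # remove DC offset, rectify
    win = max(1, int(fs * window_ms / 1000))      # window length in samples
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

# Synthetic example: a burst of muscle activity mid-way through a 1 s trace
fs = 1000  # 1 kHz sampling rate (hypothetical)
t = np.linspace(0, 1, fs, endpoint=False)
rng = np.random.default_rng(0)
amplitude = np.where((t > 0.4) & (t < 0.6), 1.0, 0.05)
raw = amplitude * rng.standard_normal(fs)

env = emg_envelope(raw, fs)
# The envelope rises during the burst and stays low elsewhere
assert env[(t > 0.45) & (t < 0.55)].mean() > env[t < 0.3].mean()
```

The envelope, rather than the raw oscillating trace, is what is typically plotted or sonified when muscle activity is "represented on a screen or through sound".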
Our research is also developing knowledge and technology in vision, artificial intelligence, machine learning and computer graphics, and we've published papers in peer-reviewed journals and conferences including Computer Graphics, Neurocomputing, SIGGRAPH, and IEEE Transactions on Human-Machine Systems and IEEE Transactions on Industrial Electronics.
Our research covers the following topics:
- Facial animation
- 4D facial reconstruction and synthesis
- Photorealistic facial expression
- Real-time facial tracking
- Photorealistic textures
- 3D reconstruction and modelling
- Physically based rendering (PBR)
- Facial texture synthesis
- Image-based rendering
- Crowd behaviour analysis
- Image-based saliency detection
We use quantitative research methods, including experiments, probability theory, statistical analysis, and computational regression and classification models, to build relationships between variables of interest such as facial expression intensity or category. This leads to better human-machine interaction, interpretation and visualisation. The results are better images, created more efficiently, at less cost.
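A minimal sketch of the kind of regression model described above: fitting a linear relationship between facial measurements and a rated expression intensity. The data here are synthetic and the features (mouth-corner distance, eyebrow raise) are hypothetical stand-ins, not the group's actual variables.

```python
import numpy as np

# Hypothetical toy data: each row holds two facial measurements
# (e.g. mouth-corner distance, eyebrow raise, arbitrary units);
# the target is a rated smile intensity on a 0-1 scale.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 2))
true_w = np.array([0.7, 0.3])                    # assumed ground-truth weights
y = X @ true_w + rng.normal(0, 0.02, size=200)   # noisy linear relationship

# Ordinary least squares: fit weights (plus an intercept) relating
# the measurements to the intensity rating.
Xb = np.hstack([X, np.ones((200, 1))])           # add an intercept column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

pred = Xb @ w
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
assert r2 > 0.9  # the fitted model explains most of the variance
```

Classification works analogously, with the continuous intensity replaced by a discrete expression category and the least-squares fit replaced by, for example, logistic regression.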
We apply Uncanny Valley theory, which predicts that images that are almost, but not quite, realistic provoke unease in viewers. Understanding this effect helps us achieve facial animation realistic enough to evoke positive emotional responses.
We use equipment including virtual reality head-mounted displays (VR HMDs), electromyographic (EMG) sensors, electroencephalographic (EEG) sensors, depth cameras, a driving simulator and advanced motion capture facilities.
Most of our academic staff researching visual computing have professional experience in emotion-sensing technology, animation production and the film industry. Paul Charisse has worked on the animation of high-profile films, including animating Gollum in The Lord of the Rings.
- We're developing affordable technology that constructs 3D models from 2D photographs taken with consumer-grade cameras, in collaboration with firms including Emteq Ltd, a pioneer in emotion-sensing wearable technology. This could allow patients with facial palsy to send a 3D reconstruction of their face to their consultant, avoiding frequent hospital visits.
- We're working with Emteq to develop 'smart eyeglasses' to monitor body, mood and balance in people with Parkinson's disease. The glasses have sensors in the frame that capture tiny facial movements at 1,000 frames per second. Remote measurements are sent from the glasses to healthcare professionals.
- We're working with the Universities of Cambridge and Nottingham to develop a prototype Virtual Reality system for people with autism to improve their social skills in a safe environment.
- Sensor-enabled ambulatory monitoring of physical activity, funded by Innovate UK
- 3D facial reconstruction, funded by the Royal Academy of Engineering
- Multimodal data-based mental workload and stress assessment for assistive brain-computer interface, funded by the Royal Academy of Engineering
Discover our areas of expertise
We're examining the theory, psychology and development of video games and contributing to the design, development and release of games.
We're exploring how virtual reality (VR) can improve patients' physical and psychological rehabilitation, and developing VR simulations for a range of healthcare applications.
We're investigating music and sound and creating new tools to enhance performance and creativity.
We're investigating the impact and application of digital technology in the cultural and heritage sectors to improve visitor experiences and conserve cultural and historical sites.
Interested in a PhD in Digital & Creative Technologies?
Browse our postgraduate research degrees – including PhDs and MPhils – at our Digital & Creative Technologies postgraduate research degrees page.