In the Fourth Industrial Revolution, intelligent robots will be deployed alongside humans on smart manufacturing lines. Working as co-workers, these robots are called collaborative robots, or cobots. Cobots may be employed as assistants to human workers, which requires improved human-robot interaction. Current smart manufacturing environments are complex, and safe human-robot interaction on a smart manufacturing line is challenging because human pose/posture changes over time. For safe operation and interaction, a cobot needs an accurate estimate of the dynamic human pose/posture. Vision-based pose estimation is widely employed for this purpose, and deep learning models are popular for such applications. However, deep learning models require large-scale labelled data for training and fine-tuning, which involves extensive human effort to annotate the data. Self-supervised learning is an approach to training deep learning models with unlabelled data, which reduces model development costs.
This project aims to develop a deep learning model for dynamic human pose estimation on a manufacturing line. The model will be trained using self-supervised learning.
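To illustrate the self-supervised idea mentioned above, the sketch below implements a contrastive objective of the kind commonly used for label-free pretraining (the NT-Xent loss popularised by SimCLR). It is a minimal NumPy illustration, not part of the project's actual method: the function name and shapes are hypothetical, and a real pipeline would apply this to embeddings of augmented image views from a deep backbone.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalised temperature-scaled cross-entropy) contrastive loss.

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    Each embedding's positive is its counterpart in the other view; every
    other embedding in the batch acts as a negative. No labels are needed.
    Returns the mean loss over all 2N anchors.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalise rows
    sim = z @ z.T / temperature                        # scaled cosine similarity
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # never match an anchor to itself
    # Index of the positive pair: anchor i pairs with i+N, and vice versa.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Cross-entropy against the positive, with all other samples as negatives.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

When the two views embed identically, every positive pair sits at maximal similarity and the loss is small; misaligned (e.g. random) pairs drive it up, which is the gradient signal that lets a backbone learn from unlabelled images.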
Applicants should hold, or expect to obtain, a First or Upper Second Class Honours Degree in a subject relevant to the proposed area of study.
We may also consider applications from those who hold equivalent qualifications, for example, a Lower Second Class Honours Degree plus a Master’s Degree with Distinction.
In exceptional circumstances, the University may consider a portfolio of evidence from applicants who have appropriate professional experience which is equivalent to the learning outcomes of an Honours degree in lieu of academic qualifications.
If the University receives a large number of applications for the project, the following desirable criteria may be applied to shortlist applicants for interview.
The University is an equal opportunities employer and welcomes applicants from all sections of the community, particularly from those with disabilities.
Appointment will be made on merit.
This project is funded by:
Our fully funded PhD scholarships will cover tuition fees and provide a maintenance allowance of £19,237 (tbc) per annum for three years (subject to satisfactory academic performance). A Research Training Support Grant (RTSG) of £900 per annum is also available.
These scholarships, funded via the Department for the Economy (DfE) and the Vice Chancellor’s Research Scholarships (VCRS), are open to applicants worldwide, regardless of residency or domicile.
Applicants who already hold a doctoral degree or who have been registered on a programme of research leading to the award of a doctoral degree on a full-time basis for more than one year (or part-time equivalent) are NOT eligible to apply for an award.
Due consideration should be given to financing your studies.
Minyoung Chung, Jingyu Lee, Minkyung Lee, Jeongjin Lee, Yeong-Gil Shin, "Deeply self-supervised contour embedded neural network applied to liver segmentation," Computer Methods and Programs in Biomedicine, Vol. 192, 2020, 105447.
Sanat Ramesh, Vinkle Srivastav, Deepak Alapatt, Tong Yu, Aditya Murali, Luca Sestini, Chinedu Innocent Nwoye, Idris Hamoud, Saurav Sharma, Antoine Fleurentin, Georgios Exarchakis, Alexandros Karargyris, Nicolas Padoy, "Dissecting self-supervised learning methods for surgical computer vision," Medical Image Analysis, Vol. 88, 2023, 102844.
Chengang Dong and Guodong Du, "An enhanced real-time human pose estimation method based on modified YOLOv8 framework," Scientific Reports, Vol. 14, 2024, 8012.
Z. Luo et al., "Rethinking the heatmap regression for bottom-up human pose estimation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 13264-13273.
Submission deadline
Monday 24 February 2025
4:00 pm
Interview Date
3 April 2025
Preferred student start date
15 September 2025