Audio-visual cues for cobotics

Summary

Smart manufacturing, involving robotics, automation and humans, has become a key focus globally. With this radical change in manufacturing comes the need for humans and robots to co-exist in shared environments. As we strive to integrate intelligent robotic systems within manufacturing environments, this effort should be accompanied by sensory-driven efficiencies in human-machine interaction.

In the fourth industrial revolution, robots will be commonplace alongside humans in industry, working as co-workers on manufacturing/assembly lines where they may be employed as assistants to human workers. These co-worker robots (cobots) will assist humans by bringing tools and parts when asked. For example, when a human asks for a tool, e.g., a hammer, the cobot will bring that tool to the human. This kind of assistive behaviour requires the cobot to recognise the physical object, e.g., the hammer, from speech uttered by the human. That means the cobot needs to be trained on both the audio speech and the images of the physical object. The two modalities, i.e., audio and visual, have fundamentally different statistical properties, and some cues exist only in one modality. Combining these two modalities is therefore non-trivial and challenging, and requires novel modelling and learning strategies. Deep learning is capable of training models with multiple modalities and can find relationships between different data modalities such as image, audio, and text. This project aims at multimodal learning of objects from more than one modality, namely audio and visual, and at defining the relationships between modalities in order to comprehend the context of tasks in a manufacturing environment.
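To make the idea concrete, the paragraph above can be sketched as a late-fusion model: each modality is passed through its own encoder, the resulting embeddings are fused, and a classifier predicts the object. The sketch below is illustrative only, not the project's method; the feature dimensions, the one-layer encoders, and fusion by concatenation are all assumptions, and the randomly initialised weights stand in for parameters that would be learned during training.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w, b):
    # One-layer modality encoder: linear projection + ReLU.
    return np.maximum(0.0, x @ w + b)

# Hypothetical dimensions: 128-d audio features (e.g. MFCCs from the
# spoken request), 512-d visual features (e.g. a CNN embedding of the
# camera image), a 64-d shared space, and 5 object classes.
audio_dim, visual_dim, embed_dim, n_classes = 128, 512, 64, 5

# Randomly initialised weights stand in for trained parameters.
w_a = rng.normal(size=(audio_dim, embed_dim)) * 0.05
w_v = rng.normal(size=(visual_dim, embed_dim)) * 0.05
w_c = rng.normal(size=(2 * embed_dim, n_classes)) * 0.05
b_a, b_v, b_c = np.zeros(embed_dim), np.zeros(embed_dim), np.zeros(n_classes)

def fuse_and_classify(audio_feat, visual_feat):
    # Encode each modality into the shared embedding space...
    z_a = encode(audio_feat, w_a, b_a)
    z_v = encode(visual_feat, w_v, b_v)
    # ...then fuse by concatenation (late fusion) and classify.
    logits = np.concatenate([z_a, z_v]) @ w_c + b_c
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()  # softmax over object classes

probs = fuse_and_classify(rng.normal(size=audio_dim),
                          rng.normal(size=visual_dim))
print(probs.shape)  # (5,) - one probability per object class
```

Late fusion is only one design point; the recommended reading below covers alternatives such as joint corpora for audio-visual speech recognition and deep RGB-D fusion, which the project would explore in depth.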

Please note that a research proposal is NOT required for this project.

Essential criteria

Applicants should hold, or expect to obtain, a First or Upper Second Class Honours Degree in a subject relevant to the proposed area of study.

We may also consider applications from those who hold equivalent qualifications, for example, a Lower Second Class Honours Degree plus a Master’s Degree with Distinction.

In exceptional circumstances, the University may consider a portfolio of evidence from applicants who have appropriate professional experience which is equivalent to the learning outcomes of an Honours degree in lieu of academic qualifications.

  • Experience using research methods or other approaches relevant to the subject domain
  • A comprehensive and articulate personal statement
  • A demonstrable interest in the research area associated with the studentship

Desirable Criteria

If the University receives a large number of applicants for the project, the following desirable criteria may be applied to shortlist applicants for interview.

  • First Class Honours (1st) Degree
  • Masters at 70%
  • For VCRS Awards, Masters at 75%
  • Experience using research methods or other approaches relevant to the subject domain
  • Work experience relevant to the proposed project
  • Publications - peer-reviewed
  • Experience of presentation of research findings

Equal Opportunities

The University is an equal opportunities employer and welcomes applicants from all sections of the community, particularly from those with disabilities.

Appointment will be made on merit.

Funding and eligibility

The University offers the following levels of support:

Vice Chancellor's Research Studentship (VCRS)

The following scholarship options are available to applicants worldwide:

  • Full Award: (full-time tuition fees + £19,000 (tbc))
  • Part Award: (full-time tuition fees + £9,500)
  • Fees Only Award: (full-time tuition fees)

These scholarships will cover full-time PhD tuition fees for three years (subject to satisfactory academic performance) and will provide a £900 per annum research training support grant (RTSG) to help support the PhD researcher.

Applicants who already hold a doctoral degree or who have been registered on a programme of research leading to the award of a doctoral degree on a full-time basis for more than one year (or part-time equivalent) are NOT eligible to apply for an award.

Please note: you will automatically be entered into the competition for the Full Award, unless you state otherwise in your application.

Department for the Economy (DFE)

The scholarship will cover tuition fees at the Home rate and a maintenance allowance of £19,237 (tbc) per annum for three years (subject to satisfactory academic performance).

This scholarship also comes with £900 per annum for three years as a research training support grant (RTSG) allocation to help support the PhD researcher.

  • Candidates with pre-settled or settled status under the EU Settlement Scheme, who also satisfy a three year residency requirement in the UK prior to the start of the course for which a Studentship is held MAY receive a Studentship covering fees and maintenance.
  • Republic of Ireland (ROI) nationals who satisfy three years’ residency in the UK prior to the start of the course MAY receive a Studentship covering fees and maintenance (ROI nationals don’t need to have pre-settled or settled status under the EU Settlement Scheme to qualify).
  • Other non-ROI EU applicants are classed as 'International' and are not eligible for this source of funding.
  • Applicants who already hold a doctoral degree or who have been registered on a programme of research leading to the award of a doctoral degree on a full-time basis for more than one year (or part-time equivalent) are NOT eligible to apply for an award.

Due consideration should be given to financing your studies. Further information on the cost of living is available.

Recommended reading

  1. A. Kashevnik et al. (2021) Multimodal Corpus Design for Audio-Visual Speech Recognition in Vehicle Cabin. IEEE Access, 9, pp. 34986-35003. doi: 10.1109/ACCESS.2021.3062752.
  2. Fuhai Chen, Rongrong Ji and Liujuan Cao (2016) Multimodal learning for view-based 3D object classification. Neurocomputing, 195, pp. 23-29.
  3. Andreas Eitel, Jost Tobias Springenberg, Luciano Spinello, Martin Riedmiller and Wolfram Burgard (2015) Multimodal Deep Learning for Robust RGB-D Object Recognition. 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 28 September - 2 October 2015, Hamburg, Germany.
  4. Jivko Sinapov, Connor Schenck and Alexander Stoytchev (2014) Learning Relational Object Categories Using Behavioral Exploration and Multimodal Perception. 2014 IEEE International Conference on Robotics and Automation (ICRA), 31 May - 7 June 2014, Hong Kong, China.

The Doctoral College at Ulster University

Key dates

Submission deadline
Monday 26 February 2024
5:00 pm

Interview Date
25 April 2024

Preferred student start date
16 September 2024

Applying

Apply Online  

Contact supervisor

Dr Nazmul Siddique

Other supervisors