This PhD aims to address a critical challenge in artificial intelligence: enhancing the transparency and trustworthiness of "black-box" models through Explainable AI (XAI). As AI systems increasingly influence fields such as healthcare and finance, their opaque nature raises significant ethical, legal, and operational concerns. The core issue is the trade-off between model accuracy and interpretability: state-of-the-art models, such as deep neural networks, reinforcement learning agents, and generative AI, are highly accurate but often lack transparency, whereas simpler models are more interpretable but less capable of handling complex tasks.
This research proposes to bridge this gap by developing novel XAI techniques that preserve model performance while improving interpretability. Moving beyond current post-hoc explanation methods such as LIME, SHAP, and Grad-CAM, the study seeks to establish robust, user-centric approaches that meet the needs of diverse stakeholders, including data scientists, policymakers, and end users.
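For context, the minimal sketch below shows what a typical post-hoc explanation of this kind looks like in practice: SHAP attributions for a random-forest regressor. The dataset, model, and sample size are illustrative assumptions only, not choices made by the project.

# Illustrative sketch only: a post-hoc SHAP explanation of the kind this
# project aims to move beyond. Dataset and model are assumed for the example.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit an opaque ensemble model on a standard tabular dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value attributions for tree ensembles:
# each feature's contribution to moving a prediction away from the baseline.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])  # shape: (100, n_features)

# Rank features by mean attribution magnitude across the explained samples.
shap.summary_plot(shap_values, X.iloc[:100])

Explanations of this kind are generated after training and per prediction; the project aims to go further, toward methods that are robust, real-time, and tailored to different audiences.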
Key objectives include designing model-agnostic explainability techniques for complex models, achieving real-time interpretability for dynamic applications such as healthcare, and tailoring explanations to different audiences. The project will also integrate fairness and bias detection in high-stakes areas, including healthcare, finance, and law.
The research methodology will combine theoretical framework development, algorithm design, and empirical testing. Theoretical work will draw on information theory, causality, and cognitive science to formalise interpretability. New algorithms will enhance transparency and scalability while remaining accessible to both technical and non-technical users. Empirical validation on domain-specific datasets will rigorously evaluate interpretability, accuracy, and fairness.
This research will contribute to the field of XAI by developing explanation methods that are robust, fair, and suited to real-time, high-stakes applications. Expected contributions include new interpretability tools, methods for bias detection, and user-friendly solutions to enhance public trust in AI. The proposal aligns with industry and societal demands for ethical AI and promises significant academic and practical advancements.
Applicants should hold, or expect to obtain, a First or Upper Second Class Honours Degree in a subject relevant to the proposed area of study.
We may also consider applications from those who hold equivalent qualifications, for example, a Lower Second Class Honours Degree plus a Master’s Degree with Distinction.
In exceptional circumstances, the University may consider a portfolio of evidence from applicants who have appropriate professional experience which is equivalent to the learning outcomes of an Honours degree in lieu of academic qualifications.
If the University receives a large number of applications for the project, the following desirable criteria may be applied to shortlist applicants for interview.
The University is an equal opportunities employer and welcomes applicants from all sections of the community, particularly from those with disabilities.
Appointment will be made on merit.
This project is funded by:
Department for the Economy (DFE) Scholarship – UK/ROI Awards
These scholarships will cover tuition fees and provide a maintenance allowance of £19,237 (tbc) per annum for three years (subject to satisfactory academic performance). A Research Training Support Grant (RTSG) of £900 per annum is also available.
To be eligible for these scholarships, applicants must meet the following criteria:
Applicants should also meet the residency criteria, which require that they have lived in the EEA, Switzerland, the UK, or Gibraltar for at least the three years preceding the start date of the research degree programme.
Applicants who already hold a doctoral degree or who have been registered on a programme of research leading to the award of a doctoral degree on a full-time basis for more than one year (or part-time equivalent) are NOT eligible to apply for an award.
Due consideration should be given to financing your studies. Further information on the cost of living is available.
Submission deadline: Thursday 9 January 2025, 4:00 pm
Interview date: 24 January 2025
Preferred student start date: 31 March 2025