Ulster recognises that staff and students are using AI technology now and will continue to do so, both in personal and professional settings. Indeed, AI will be part of many of our students' future working lives, and new roles and job opportunities in the sector will follow.
Exploration of these tools, and of the associated research, is encouraged; however, caution must also be applied to ensure the tools are used in an ethical and responsible way.
Ulster’s current position on the use of AI in teaching and assessment is to:
- Promote the equitable, ethical and inclusive use of AI for education to benefit staff and students
- Encourage a balanced approach to AI adoption that uses AI as complementary tools while recognising their benefits and limitations
- Provide training and support for staff and students in AI literacy and ensure graduates are prepared for the AI-driven workplace
- Mitigate the risk of AI misuse in assessment by:
- endorsing and embedding the principle of academic integrity
- integrating AI technologies to enhance learning outcomes, and to develop critical thinking, digital literacy and analytical skills
- measuring holistic competencies such as creativity, leadership and teamwork
- promoting the appropriate attribution of AI technologies in student work
- Encourage the development of School-based innovations in AI for education to ensure it meets the specific needs of various disciplines
- Encourage pilot projects to test and evaluate the use of AI in education and to build and share an evidence base for its effectiveness
- Continually review and maintain policies, codes of practice, standards, and procedures for protecting University information
AI Information
AI Literacy
“Educators need to have at least a basic knowledge of AI and data usage in order to be able to engage positively, critically, and ethically with this technology and to properly use it to exploit its full potential.”
While GenAI tools are effective at analysing, structuring and writing text, or producing videos and images, the outputs are at risk of bias or could simply be inaccurate.
The outputs of GenAI will only be as good as the input prompts, and a good understanding of a given subject/domain will help the GenAI user to critique these outputs. In this way, GenAI is not a substitute for knowledge, judgement, or learning, but it can help with the planning and drafting of work. AI literacy will be a critical 21st Century skill and Ulster University graduates will need to acquire this skill to help them navigate an AI-driven society.
The following blog site has been highlighted by QAA and provides useful insights into supporting critical AI literacy development. The blog site includes further links to explore ethical implications and methods to incorporate AI into our practice:
AI Literacy Skills include:
- a technical understanding of GenAI and its capabilities
- understanding the practical applications of GenAI and how to use the tools
- understanding the ethics associated with GenAI tools and the implications they have on society
The Office for AI (part of the Dept for Science, Innovation & Technology, Gov.UK) is currently conducting research into the skills needed for future workforce training.
Embedding AI Literacy Skills
The AI Working Group is developing a strategy to embed critical AI literacy development across curricula. This will include, for example:
- Defining critical AI literacy skills
- Providing briefings for Validation/Revalidation panels to explore key themes including AI literacy and misconduct
- Providing briefings for External Examiners during their induction to ensure appropriate scrutiny
- Providing guidance for course/subject teams during course design/redesign to help embed discipline-specific use of AI into curricula and to identify assessments that are vulnerable to Gen AI misuse.
- Including critical AI literacy within professional development programmes (both L&T and Research)
- Encouraging course/subject teams to critically review student performance data where potential Gen AI misuse is suspected, and to identify and explore any abnormal patterns of marks
Governance and Ethics
While AI creates opportunities for innovation and efficiency, it also creates a range of new risks for the rights and freedoms of individuals, as well as compliance challenges for organisations including Universities. The data protection implications of AI are dependent on the specific use cases, but AI is increasingly ubiquitous both in the workplace and our daily lives, across various digital platforms. This makes a definitive University AI policy challenging. However, appropriate guidance will ensure that staff and students can mitigate these risks and explore the benefits of AI appropriately.
The risk of bias and misinformation
Gen AI tools recognise patterns in data sets but do not understand context and cannot draw conclusions in complex or abstract contexts. Any outputs will require human oversight for sense checking and accuracy.
Automatically generated content can reproduce and amplify patterns of:
- marginalisation
- inequality
- discrimination
This is because GenAI tools learn from data sets built on existing structures and dynamics of a given society, and from the preconceptions and biases of the AI designer. The data sets may not be representative of a population and may be flawed from the outset. This data could also lead to misinformation and there is even a risk of explicit outputs and violent language. Staff and students must be aware of these serious limitations and use their judgment to check outputs for appropriateness and accuracy.
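To make the point about unrepresentative data concrete, the following is a minimal Python sketch, with invented labels and figures, of how a model trained on a skewed sample simply reproduces that skew:

```python
from collections import Counter

# A deliberately naive "model" that predicts whatever label was most common
# in its training data. The labels and proportions here are invented purely
# for illustration.
def train_majority(labels):
    return Counter(labels).most_common(1)[0][0]

# A sample that over-represents one outcome, e.g. because of how it was collected
skewed_sample = ["rejected"] * 90 + ["accepted"] * 10

# The model now scores every future case the same way, regardless of merit
print(train_majority(skewed_sample))  # prints "rejected"
```

Real GenAI systems are vastly more complex, but the principle is the same: patterns in the training data, representative or not, become patterns in the output.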
Data Security
As Generative AI learns from data drawn from a wide range of digital platforms, all users must be particularly cautious when inputting data. AI can utilise personal data that has been captured and extracted without the consent of the data subject. When writing prompts for chatbots, for example, you are at risk of sharing personal, proprietary and confidential data, including intellectual property. This data may be stored by service providers, re-used in other generated responses, and potentially misused. For this reason:
Never input:
- personal information
- sensitive or confidential data
- copyright-protected information
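As a practical safeguard, the sketch below strips obvious personal identifiers from text before it is pasted into an external GenAI tool. The `redact` helper and its patterns are illustrative assumptions, not a University-provided utility; pattern-based redaction is a safety net, not a substitute for judgement about what to share:

```python
import re

# Illustrative patterns only -- real personal data takes many more forms
# than emails and phone numbers.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b0\d{2,4}[ -]?\d{3,4}[ -]?\d{3,4}\b")

def redact(text: str) -> str:
    """Replace obvious emails and UK-style phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(redact("Contact j.bloggs@example.ac.uk or call 028 9536 5131"))
# prints "Contact [EMAIL] or call [PHONE]"
```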
For further cybersecurity information, access the nine golden rules for staff below:
Working safely online – nine golden rules for staff.
Using AI to process any personal data therefore has important implications for individuals and for the University's security risk profile. The University has in place a suite of information and IT policies, codes of practice, standards and procedures for protecting University information; staff are encouraged to review these and to apply online safety rules at all times.
When considering the use of Gen AI tools:
- don't become reliant on these tools: they can enhance, but should not replace, your own work and oversight
- rationalise their use and limit the number of uses to reduce data security risks
- approach their use with a healthy dose of paranoia and practise good cybersecurity behaviours
AI Risk Categorisation
AI risks fall into five broad categories: data-related risks, AI attacks, testing and trust, compliance, and learning limitations.

Data-Related Risks
- Data Quality: poor-quality data limits the learning capability of the system and can lead to poor predictions.
- Lack of Transparency: it is not always clear how personal data is processed in an AI system, or for how long data is retained.

AI Attacks
- Data Privacy Attacks: an attack can compromise the privacy of data and infer sensitive information.
- Training Data Poisoning: training data is contaminated, which can negatively affect AI learning and the output.
- Adversarial Inputs: an adversary could use malicious and deceptive inputs, leading to incorrect predictions.
- Model Extraction: an adversary tries to steal and replicate the model (algorithm).

Testing and Trust
- Incorrect Output: testing for all combinations of data may not be feasible and can lead to potential gaps in coverage.
- Bias: depending on the use case (training data), AI outputs could be unfairly biased.

Compliance
- Policy non-compliance: AI systems may not comply with existing internal policies, and AI regulations are still pending.

Learning Limitations
- AI is only as effective as the data it is trained on.
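The attack risks above can be illustrated with a toy example. The sketch below uses invented data and a deliberately simple 1-nearest-neighbour rule to show how training data poisoning changes a model's output:

```python
# Toy 1-nearest-neighbour classifier over (value, label) pairs: the prediction
# for x is the label of the closest training point.
def nearest_label(x, training_data):
    return min(training_data, key=lambda pair: abs(pair[0] - x))[1]

clean = [(1.0, "benign"), (2.0, "benign"), (9.0, "malicious")]
print(nearest_label(4.0, clean))      # prints "benign"

# An attacker slips a single mislabelled point into the training set...
poisoned = clean + [(4.1, "malicious")]
print(nearest_label(4.0, poisoned))   # prints "malicious"
```

One contaminated record is enough to flip this toy model's prediction; at scale, the same principle makes curating and securing training data a genuine risk-management task.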
Digital Competencies for the Ethical use of AI and data
The European Commission has generated a list of emerging AI competencies for educators. While these are not HE-specific, they still provide a useful framework for future-proofing skills. The competencies are summarised below.
Professional Engagement
Using digital technologies for communication, collaboration, and professional development

Indicators:
- Is able to critically describe positive and negative impacts of AI and data use in education.
- Takes an active part in continuous professional learning on AI and learning analytics and their ethical use.
- Able to give examples of AI systems and describe their relevance.
- Knows how the ethical impact of AI systems is assessed in the institution.
- Knows how to initiate and promote strategies across the institution and its wider community that promote ethical and responsible use of AI and data.

Competency: Understand the basics of AI and learning analytics
Indicators:
- Aware that AI algorithms work in ways that are usually not visible or easily understood by users.
- Able to interact and give feedback to the AI system to influence what it recommends next.
- Aware that sensors used in many digital technologies and applications generate large amounts of data, including personal data, that can be used to train an AI system.
- Aware of EU AI ethics guidelines and self-assessment instruments.
Digital resources
Sourcing, creating, and sharing digital resources

Competency: Data governance
Indicators:
- Aware of the various forms of personal data used in education and training.
- Aware of responsibilities in maintaining data security and privacy.
- Knows that the processing of personal data is subject to national and EU regulation, including GDPR.
- Knows that processing of personal data usually cannot be based on user consent in compulsory education.
- Knows who has access to student data, how access is monitored, and how long data are retained.
- Knows that all EU citizens have the right not to be subject to fully automated decision making.
- Able to give examples of sensitive data, including biometric data.
- Able to weigh the benefits and risks before allowing third parties to process personal data, especially when using AI systems.

Competency: AI governance
Indicators:
- Knows that AI systems are subject to national and EU regulation (notably the AI Act, to be adopted).
- Able to explain the risk-based approach of the AI Act (to be adopted).
- Knows the high-risk AI use cases in education and the associated requirements under the AI Act (when adopted).
- Knows how to incorporate AI edited/manipulated digital content in one's own work and how that work should be credited.
- Able to explain key principles of data quality in AI systems.
Teaching and Learning
Managing and orchestrating the use of digital technologies in teaching and learning

Competency: Models of learning
Indicators:
- Knows that AI systems implement the designer's understanding of what learning is and how learning can be measured; can explain key pedagogic assumptions that underpin a given digital learning system.

Competency: Objectives of education
Indicators:
- Knows how a given digital system addresses the different social objectives of education (qualification, socialisation, subjectification).

Competency: Human agency
Indicators:
- Able to consider the AI system's impact on teacher autonomy, professional development, and educational innovation.
- Considers the sources of unacceptable bias in data-driven AI.

Competency: Fairness
Indicators:
- Considers risks related to emotional dependency and student self-image when using interactive AI systems and learning analytics.

Competency: Humanity
Indicators:
- Able to consider the impact of AI and data use on the student community.
- Confident in discussing the ethical aspects of AI, and how they influence the way technology is used.

Competency: Participates in the development of learning practices that use AI and data
Indicators:
- Can explain how ethical principles and values are considered and negotiated in the co-design and co-creation of learning practices that use AI and data (linked to learning design).
Assessment
Using digital technologies and strategies to enhance assessment

Competency: Personal differences
Indicators:
- Aware that students react in different ways to automated feedback.

Competency: Algorithmic bias
Indicators:
- Considers the sources of unacceptable bias in AI systems and how it can be mitigated.

Competency: Cognitive focus
Indicators:
- Aware that AI systems assess student progress based on pre-defined, domain-specific models of knowledge.
- Aware that most AI systems do not assess collaboration, social competences, or creativity.

Competency: New ways to misuse technology
Indicators:
- Aware of common ways to manipulate AI-based assessment.
Empowering Learners
Using digital technologies to enhance inclusion, personalisation, and learners' active engagement

Competency: AI addressing learners' diverse learning needs
Indicators:
- Knows the different ways personalised learning systems can adapt their behaviour (content, learning path, pedagogical approach).
- Able to explain how a given system can benefit all students, independent of their cognitive, cultural, economic, or physical differences.
- Aware that digital learning systems treat different student groups differently.
- Able to consider impact on the development of student self-efficacy, self-image, mindset, and cognitive and affective self-regulation skills.

Competency: Justified choice
Indicators:
- Knows that AI and data use may benefit some learners more than others.
- Able to explain what evidence has been used to justify the deployment of a given AI system in the classroom.
- Recognises the need for constant monitoring of the outcomes of AI use, and to learn from unexpected outcomes.
Facilitating learners' digital competence
Enabling learners to creatively and responsibly use digital technologies for information, communication, content creation, wellbeing and problem-solving

Competency: AI and Learning Analytics ethics
Indicators:
- Able to use AI projects and deployments to help students learn about the ethics of AI and data use in education and training.