ChatGPT was the first experience many of us had with conversational interfaces to Large Language Models (LLMs) – models trained on large datasets of text drawn from a variety of sources. These conversational interfaces allowed us to ask the system to complete tasks iteratively, refining the output based on our prompts. The system could also respond to requests to generate computer code, perform common tasks and workflows, or create images and video.
These tools have evolved over several years within research laboratories, and many of our academic teams at Ulster are actively involved in AI research and teaching. Within the research community, some dispute whether an LLM is really Artificial Intelligence, while others navigate the debate from a centrist perspective (Pallaghy 2022). Despite these different philosophical positions, the simplified interface and apparently reasoned responses provided by ChatGPT seemed like a huge leap in our expectations of these tools. This has resulted in much greater visibility of the opportunities and challenges of these technologies across subject disciplines, including creative disciplines where AI can produce images and video from natural language prompts.
We believe AI tools will be a part of our personal and professional lives, and we wish to explore their use in ethical, transparent, and reasonable ways. It is entirely appropriate to use generative AI as part of your work, provided you do not claim work generated by AI as your own. You may wish to use the tools as part of your workflow to check ideas, or to seek guidance if you have questions. The technologies are constantly evolving, and many of us are discovering new ways to use the tools, often informed by our own professional contexts. Some of the technologies we license, including Blackboard and Microsoft solutions, will have generative AI functionality in future releases.
Course/Module level guidance - keeping students informed
At course level, module coordinators should consider the extent to which AI is incorporated into their own teaching and assessment strategies. An example of guidance that can be shared with students (e.g., in course handbooks or via BBL) is outlined below:
Generative AI tools may be used to support your learning and the preparation of coursework. The tools can help you to learn about new concepts and can help you develop digital skills in the process. Some examples of how they can be used are as follows:
- Planning the structure of written work.
- Developing creative ideas and inspiration.
- Answering questions about web-based material.
- Helping to improve writing skills.
- Asking for an explanation of a topic.
However, while these tools can generate content that appears reasonable, they should not be relied upon to be wholly accurate, and you should be aware of their limitations. Some of the current limitations of Large Language Model (LLM) AI tools include:
- The tools do not understand the meaning of the words they produce.
- The tools will often generate arguments that are factually wrong.
- The tools will often generate false references and quotations.
- Content generated is not checked for accuracy.
- The tools can distort the truth and overstate the strength of an opposing argument.
- The tools may struggle to maintain contextual understanding over extended conversations; however, there are ongoing developments in this area.
- The tools may struggle to generate responses based on visual and auditory input.
- Generated content can include harmful bias and reinforce stereotypes. These biases can be reinforced through further human interaction with the model.
- The tools rely heavily on data access to generate responses. This has led to concerns about data privacy.
- The models are trained on data sets that reflect a predominantly Western, English-speaking perspective, again reinforcing particular viewpoints.

Developing skills to prompt AI tools is likely to be a useful digital skill, but users should also understand the limitations and remain open, curious, and critical when making judgements about the accuracy of the content generated.
Importantly, unless you are specifically asked to do so, you should avoid using GenAI to create content for assessed coursework and research. The use of AI tools must not substitute your critical thinking, problem-solving skills and thought processes. Your work must be original and reflect your own informed perspective and understanding.
When using GenAI technology, you are expected to exercise responsible and ethical practices. This includes:
- checking whether the use of GenAI is permitted for a given piece of assessment
- understanding the limitations and potential biases of AI algorithms and limiting their use accordingly
- following cybersecurity principles when using AI tools and never inputting:
  - personal information
  - sensitive or confidential data
  - copyright-protected information
- critically evaluating AI-generated content
- maintaining academic integrity by appropriately citing and acknowledging all sources
- keeping records of draft work and notes
Acknowledging the use of Generative AI
Where generative artificial intelligence (AI) tools have been used for an assessment, they must be acknowledged appropriately to ensure that any output is not misconstrued as the student’s own work. Before beginning any piece of assessed work, students should check that the use of AI tools is authorised, as this practice may differ across modules and courses of study.
Use the links below to find out more about citing and referencing AI in the Harvard style for your faculty.
- Citing AI Generative Tools in the Ulster Harvard Referencing style for LHS
- Citing AI Generative Tools in the Ulster Harvard Referencing style for CEBE, AHSS and UUBS
If you are using a referencing style other than Harvard, please contact your Library Subject Team.
Information for staff involved in teaching and assessment
The wider HE discourse has naturally centred on assessment and concerns about academic integrity. How assessment design can respond to these concerns is discussed in the assessment design considerations section below.
The QAA has recently published guidance on how to approach the assessment of students in a world where students have access to Generative Artificial Intelligence (AI) tools.
There are, however, many situations in which alternative assessment or assessment redesign may not be practical, or in which changes may take time, and many colleagues have become curious about AI detection software. While there are tools that claim to detect AI-generated text, they demonstrate varying levels of reliability. Jisc and the QAA have provided helpful information on these detection tools:
Jisc notes: “AI detectors cannot prove conclusively that text was written by AI.” Michael Webb (17/3/2023), AI writing detectors – concepts and considerations.
Experimenting with AI tools
Before experimenting with any generative AI tool, you should give some consideration to privacy. When we use these tools, we do not know what data is being collected, by whom, or how it is applied. For this reason, you should not share personal or sensitive data; for instance, it would not be appropriate to ask an AI tool to perform analysis on a dataset containing student data. Currently, ChatGPT (the latest version is GPT-4) can be tried free online, but be aware that there are also paid-for subscriptions.
You might start experimenting with the tool by asking a question such as:
- What are the ethical considerations of using Generative AI?
- Explain AI bias in a way that a child can understand
You can get more targeted results by being more specific in your prompts.
- Tell me how [add query] works in 50 words.
- Behave as a higher education lecturer. [Add query]
- Write a four-paragraph summary about [add query]
- My excel spreadsheet has two columns, A & B, how can I find results that are in both columns?
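As an illustration of the kind of answer the Excel prompt above might produce, a tool will often suggest a formula such as COUNTIF alongside an explanation of the underlying idea. A minimal Python sketch of the same idea (the column values below are purely illustrative) might look like:

```python
# Find values that appear in both columns, the same idea an AI tool
# might express in Excel with a COUNTIF formula.
# The sample data is purely illustrative.
col_a = ["apple", "banana", "cherry", "date"]
col_b = ["banana", "date", "fig"]

# A set intersection yields the values present in both columns.
common = sorted(set(col_a) & set(col_b))
print(common)  # ['banana', 'date']
```

The response you actually receive will vary with the tool and the wording of the prompt, which is itself a useful point to explore when experimenting.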
You may also wish to test some subject-specific prompts such as:
- Build an HTML website homepage with three columns and a hero image
- Can you explain Standard Deviation using an Excel example?
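For the standard deviation prompt, the kind of worked example an AI tool might return can be sketched in Python, mirroring Excel's STDEV.P function for a population standard deviation (the data values below are purely illustrative):

```python
import math

# Illustrative data, as might appear in a single Excel column.
values = [2, 4, 4, 4, 5, 5, 7, 9]

# Population standard deviation, as computed by Excel's STDEV.P:
# the square root of the mean of squared deviations from the mean.
mean = sum(values) / len(values)                               # 5.0
variance = sum((x - mean) ** 2 for x in values) / len(values)  # 4.0
std_dev = math.sqrt(variance)                                  # 2.0
print(std_dev)  # 2.0
```

Checking a generated explanation like this against a calculation you perform yourself is a good example of the critical evaluation recommended above.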
Further guidance
Using AI to implement effective teaching strategies in classrooms (Mollick E & Mollick L, 2023)
Jisc National Centre for AI:
The QAA advises: “Be cautious in your use of tools that claim to detect text generated by AI and advise staff of the institutional position. The output from these tools is unverified and there is evidence that some text generated by AI evades detection. In addition, students may not have given permission to upload their work to these tools or agreed how their data will be stored.” QAA (31/1/2023), The rise of artificial intelligence software and potential risks for academic integrity: briefing paper for higher education providers
OpenAI, as of 24th July 2023, have disabled their own detection service following concerns about accuracy.
A note on Turnitin AI detection service
Turnitin does provide an AI writing detection service which can be integrated within normal grading workflows. When grading a paper, instructors are presented with a prediction of the likelihood that a piece of work was generated by AI tools such as ChatGPT. However, the AI working group at Ulster had significant concerns from an ethical, accuracy and privacy perspective and made the decision to disable this tool, very much in line with the wider UK Higher Education sector. Turnitin can provide no evidence as to how the AI score is generated, making any academic integrity judgement difficult.
Assessment design considerations
Ulster has a long history of active learning pedagogies combined with authentic assessment design. Current AI in assessment discussions can help us to refocus on creative assessment design that measures active learning, critical thinking, problem-solving and reasoning skills rather than written assignments measuring declarative knowledge. Personalised, reflective accounts, developed iteratively, as understanding develops, are also valuable approaches and some subject disciplines have been using video and oral presentations to measure understanding and create a more personalised approach to assessment. These diverse approaches to assessment are identified as good practice across the sector; being more inclusive while reducing the risk of plagiarism.
Authentic Assessments for the AI Era
The webinar below focuses on how education providers and instructors are reconsidering effective and authentic feedback and assessment strategies as AI tools and resources become readily available to students around the world.
TED Talk: How AI could save (not destroy) Education
Sal Khan, the founder and CEO of Khan Academy, believes AI can spark the "greatest positive transformation education has ever seen."
Further Information on AI in Assessment
- QAA 's 'Reconsidering assessment for the ChatGPT era'
- Advance HE: Authentic assessment in the era of AI
- Rethinking Assessment in the Age of AI
- Perspectives on redesigning assessment practices in an AI world
- Using AI tools in assessment
The AI working group is establishing Faculty/School-based subgroups. The aim of these groups is to explore the use of AI tools within discipline-specific contexts and to surface examples of good practice and associated challenges. Outputs from these groups will be shared on this site in due course.