ChatGPT is the most widely known generative AI tool, attracting significant media coverage, interest, and experimentation.
The underlying technology is not new: you have probably already used it through email spam filters, media recommendation systems, navigation apps, and online chatbots.
For many of us, ChatGPT was our first experience of a conversational interface, letting us ask the system to complete tasks iteratively and refine its output through natural language prompts.
The system is trained on large data sets drawn from three primary sources of information:
- Information that is publicly available on the internet
- Information that is licensed from third parties
- Information that users or human trainers provide
ChatGPT is one of several solutions released by its parent company, OpenAI. OpenAI began as a not-for-profit research lab and subsequently attracted venture capital investment, including from Microsoft. As a result, its tools are being integrated into existing business and personal applications such as web browsers (Microsoft Edge) and the Microsoft Office suite.
As the solutions can be licensed, new plugin architectures are evolving that allow other businesses to integrate their services with the tools. Similar commercial approaches are emerging around other solutions such as Google Bard, whose functionality will be surfaced in Google Docs. This is an important point: AI tools are, and will continue to be, embedded within the applications we use daily.
The Limitations of Generative AI Tools
The most rapid evolution in these tools has been in natural language conversational interfaces, which appear to respond in an intelligent, human-like way.
The tools, however, lack other attributes we consider human, including morality, critical thought, and common-sense judgement. They produce outputs based on patterns in their training data, and these outputs may be biased, incomplete, out of date, or in some cases entirely false.
While AI tools may be useful for the initial planning of work and for generating ideas, apply caution in their use: they should enhance, not replace, your own work.
Always apply critical judgement when reviewing content generated by these tools, and be aware of their limitations.
Some current limitations of Large Language Model (LLM) AI tools include:
- The tools do not understand the meaning of the words they produce.
- The tools will often generate arguments that are factually wrong.
- The tools will often generate false references and quotations.
- Content generated is not checked for accuracy.
- The tools can distort the truth and over-emphasise the strength of an opposing argument.
- The tools may struggle to maintain contextual understanding over extended conversations, although there are ongoing developments in this area.
- The tools may struggle to generate responses based on visual and auditory input.
- Generated content can include harmful bias and reinforce stereotypes. These biases can be reinforced through further human interaction with the model.
- The tools rely heavily on access to data to generate responses, which has led to concerns about data privacy.
- The models are trained on data sets that reflect a Western, English-speaking perspective, again reinforcing particular viewpoints.

Developing the skills to prompt AI tools effectively is likely to be a useful digital skill, but users should also understand the tools' limitations and remain open, curious, and critical when making judgements about the accuracy of the content generated.