To use generative AI tools effectively, you need to understand their limitations. Tools like ChatGPT can quickly produce impressively fluent, plausible-sounding text. But they do this by pattern-matching: predicting the most probable next word (or fragment of a word) after a particular input, or prompt.
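To make that concrete, here is a deliberately tiny sketch of the idea in Python. The prompt, vocabulary and probabilities are all invented for illustration; a real LLM learns a distribution over hundreds of thousands of tokens from vast amounts of text, but the principle of picking a statistically likely continuation is the same.

```python
import random

# Toy next-word predictor. The prompt and probabilities below are
# invented; a real model learns them from enormous amounts of text.
NEXT_WORD_PROBS = {
    "the cat sat on the": {"mat": 0.6, "sofa": 0.3, "roof": 0.1},
}

def predict_next_word(prompt: str) -> str:
    """Pick a continuation at random, weighted by learned probability."""
    probs = NEXT_WORD_PROBS[prompt]
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

print(predict_next_word("the cat sat on the"))  # usually prints "mat"
```

Notice that nothing in this process checks whether the answer is true: 'mat' wins because it is statistically likely, not because the model knows anything about cats.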
At this stage, most generative AI tools are not able to research or reliably check facts in the real world. So be careful when using these tools to produce factual or informative content. Use your own judgement and knowledge to ensure any final text is factually correct.
You should not put any confidential or personally identifiable information into a generative AI tool. Text entered into generative AI tools can be used to train and refine the AI model. This means it could be seen by humans or shared with other organisations, which presents problems for privacy and intellectual property rights. And on occasion, prompts and text entered into generative AI tools have been accessible to other users. The Information Commissioner's Office (ICO) has a comprehensive and growing set of tools and guidance on data protection and AI.
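If you do need to run text through a tool, one practical precaution is to strip obvious personal details first. The minimal Python sketch below redacts email addresses and one common UK phone format, using invented example text; simple pattern-matching like this misses plenty, so it supplements rather than replaces checking the text yourself.

```python
import re

# Minimal redaction sketch. The patterns catch common formats only
# and will miss names, addresses and unusual layouts; always review
# the text yourself before pasting it into a generative AI tool.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"\b0\d{3}\s?\d{3}\s?\d{4}\b"), "[PHONE]"),  # one UK format
]

def redact(text: str) -> str:
    """Replace obvious personal details with placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Contact Jo on jo.bloggs@example.org or 0131 496 1234."))
# Contact Jo on [EMAIL] or [PHONE].
```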
Generative AI tools rely on large datasets, sourced from publicly available text such as internet content. Datasets like these are vulnerable to bias: they don't represent the full range of human experience, and they can reproduce prejudice and stereotypes, so their output doesn't always reflect reality.
Developing the large language models (LLMs) behind generative AI tools requires a significant amount of computer processing power, and running generative AI queries and responses also consumes a lot of it. The resulting energy demand and emissions are significant.
Generative AI tools have been developed by trawling huge datasets of existing content. Some artists and creatives argue that this allows tech companies to profit at the expense of human creativity. If you wrote a prompt that said ‘draw a charity card illustration of Edinburgh in the style of Van Gogh’, you might get something aesthetically pleasing. But a large part of that appeal is Van Gogh’s style, yet Van Gogh would get no credit in the AI output. This is even more of an issue for living artists, many of whom have voiced their opposition. It’s still unclear how courts will rule on copyright cases involving AI’s use of existing creative work.
Low-paid labour is another issue: when AI models are trained, large amounts of training data need to be reviewed and labelled by humans. In particular, human moderators have to review and label harmful content, to help ensure that an AI tool doesn’t generate and share that kind of content when it is used. Media reports have revealed how some companies have exposed low-paid workers to large amounts of harmful content.
Building on the issues around training data and copyright, LLMs and generative AI tools are currently able to produce plausible content because they have been trained on a wide range of human-generated content. Some people have warned that, as AI-generated content proliferates and finds its way back into training data, the quality and originality of AI output might decline over time. At the moment, there is no agreed and foolproof way of tagging AI-generated content, although some companies such as Adobe are experimenting with digital watermarks. Among other things, tags like these could help ensure that AI models are not trained on AI-generated content.
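As a conceptual sketch only, tagging can be as simple as attaching a provenance label to a file’s metadata that a data-collection pipeline could later check. Real schemes such as Adobe’s Content Credentials are cryptographically signed and far harder to strip; the Python example below uses the Pillow imaging library and an invented ‘ai-generated’ metadata key purely to illustrate the idea.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Conceptual sketch only: the "ai-generated" key is invented here, and
# real provenance schemes are signed rather than plain text metadata.
image = Image.new("RGB", (64, 64), color="steelblue")  # stand-in for AI output

metadata = PngInfo()
metadata.add_text("ai-generated", "true")
image.save("output.png", pnginfo=metadata)

# A training pipeline could then skip files that carry the tag:
reloaded = Image.open("output.png")
if reloaded.text.get("ai-generated") == "true":
    print("Skipping AI-generated image when collecting training data.")
```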
There’s a risk that generative AI tools might lead organisations to decide that some jobs can be done more cheaply and effectively via AI. In the US, the National Eating Disorder Association fired employees who had unionised, and attempted to replace them with a chatbot. Shortly afterwards, the chatbot was found to be sharing harmful advice and had to be switched off. This is an extreme example which attracted a lot of media attention.
Looking at writing and content creation more generally, there will continue to be a need for skilled creatives and editors. If we did less of our own original writing, we might become less skilled at judging whether AI-generated content is high quality. Organisations will still need human expertise and insight to ensure that content, however it has been generated, is relevant and accurate.
If voluntary sector workers are able to quickly generate routine responses to simple queries, this potentially frees up time and resources to tackle trickier problems.