We know some of the words and phrases to do with AI may be new, unfamiliar and potentially confusing. Our glossary is here to explain some of the common terms.
Term | Explanation |
---|---|
AI | Artificial Intelligence (AI) refers to the simulation of human intelligence in machines programmed to perform tasks that typically require human thinking, such as learning, problem-solving, decision-making, and natural language processing. AI systems can be trained to recognise patterns, make predictions, and improve their performance over time, often with minimal human intervention. |
Bias | Bias in data arises when some people or situations are under- or over-represented. This can lead to flawed conclusions: if the data contains only a partial or unrepresentative picture of the real world, it is biased and therefore unreliable. Sometimes we talk about data ‘encoding bias’. This is where the way data is collected replicates or reinforces historical bias. For example, training a recruitment algorithm on past hiring practices could build in a bias against female candidates who had been discriminated against in the past. |
ChatGPT | A chat-based generative AI tool that can generate human-like text in response to short or complex inputs (‘prompts’). |
Expertise | In this context, expertise means real-world experience and the ability to distinguish between accurate and inaccurate information. For example, a trained doctor with medical expertise would be able to assess whether brief medical advice was accurate or misleading. |
Fact | A piece of information about the real world that is true. For example, Edinburgh is the capital city of Scotland. We talk about information being factually accurate when it matches reality. Many generative AI tools are not able to reliably generate factually correct results. |
Generative AI | A range of Artificial Intelligence (AI) tools that can create new text, images, video, audio, code or synthetic data. |
Hallucination | In this context, hallucination describes what happens when a generative AI tool returns a false result but presents it as if it were true. |
Input | Information you put into a generative AI tool, usually a short string of text. |
Large Language Model | The data and procedures that enable an AI tool to generate the most likely response to natural-language inputs. For example, prompting with ‘The sky’ is likely to return the phrase ‘is blue’. |
Misinformation | False information that is shared unintentionally. This frequently happens on social media. Generative AI creates new risk in relation to misinformation because it can quickly produce results that sound plausible, but have little or no connection to the facts. |
Plausible | Something that sounds true, but has not been verified by checking the facts. For example, it seems plausible that the highest village in Scotland is somewhere in the Highlands. In fact, it’s Wanlockhead, in Dumfries & Galloway. |
Probabilistic | Large Language Models use ‘probabilistic’ algorithms to generate their outputs: they predict, or guess, the next most likely word in a sequence, based on what has been input so far. The AI system doesn’t need to understand the meaning of the words in a prompt; it simply pattern-matches towards the most likely answer (see the first sketch after this table). |
Prompt | An input (usually text) entered into a generative AI tool. It could be very short (‘write an advert for a café’) or much longer (‘write a poster advert for a vegan café in North Lanarkshire emphasising local fresh food’). |
Output/result | What comes out of a generative AI system (text, images, sounds) based on an initial input. Because generative AI tools work in a probabilistic way, the same initial prompt can generate different responses at different times (see the second sketch after this table). |
Refining | Entering additional prompts to adjust or improve a tool’s output. Many generative AI tools support this. For example, you might ask a chat-based tool to write an advert for a café in the style of William McGonagall (we advise you not to). |
Training | Developers train models on large amounts of data to set and fine-tune their parameters. They can also optimise AI tools, for example by adding extra rules or filters to try to avoid generating offensive or harmful output. But models can’t usually be retrained or updated by users. |
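
To make the ‘Large Language Model’ and ‘Probabilistic’ entries above more concrete, here is a toy sketch in Python. It is not how real Large Language Models are built (they learn from billions of examples rather than a hand-typed sample), but it shows the same underlying idea: counting which word tends to follow which, then pattern-matching to the most likely continuation. The sample text and function names here are invented purely for illustration.

```python
# A toy sketch of 'probabilistic' next-word prediction.
# Real Large Language Models learn from billions of examples; this
# miniature version just counts which word follows which in a tiny
# made-up sample, then picks the most likely continuation.
from collections import Counter, defaultdict

sample_text = (
    "the sky is blue . the sky is clear . the sea is blue . "
    "the grass is green ."
)

# Count how often each word follows each other word (a 'bigram' count).
follows = defaultdict(Counter)
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def most_likely_next(word):
    """Return the statistically most likely next word, with no
    understanding of what the words actually mean."""
    candidates = follows.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(most_likely_next("sky"))  # -> 'is'
print(most_likely_next("is"))   # -> 'blue' (seen twice; 'clear' and 'green' once each)
```

Notice that the program never ‘understands’ the sky or the sea; it only matches patterns in the counts, which is why such systems can produce plausible-sounding but factually wrong output.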
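The second sketch illustrates the ‘Output/result’ point that the same prompt can produce different responses at different times. Generative tools typically sample from a probability distribution rather than always choosing the single most likely word, so repeated runs can diverge. The word weights below are made up purely for illustration.

```python
# A minimal sketch of why the same prompt can produce different outputs.
# Rather than always picking the single most likely next word, many
# generative tools sample from a probability distribution, so running
# the same prompt twice can give different continuations.
import random

# Hypothetical probabilities for the word after the prompt 'The sky'.
next_word_weights = {"is": 0.6, "was": 0.2, "looks": 0.2}

def continue_prompt(prompt):
    """Append one word, sampled according to its probability."""
    words = list(next_word_weights)
    weights = list(next_word_weights.values())
    chosen = random.choices(words, weights=weights, k=1)[0]
    return f"{prompt} {chosen}"

# Run the same prompt three times: the continuations can differ.
for _ in range(3):
    print(continue_prompt("The sky"))
```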