
What a time to be sentient! – an early look at the potential and pitfalls of chat-based AI tools

[Header image generated by Stable Diffusion. Prompt: ‘a photograph of an astronaut riding a horse’]

Tools like ChatGPT, Google Bard and Stable Diffusion have been hitting the headlines since the start of the year. Why all the hype about generative AI? We’re not all going to get replaced or turned into paperclips next week, but generative AI will have profound impacts on the way that we work within a fairly short timeframe.  

If you can use a search engine, you can work with generative AI. And if you use any written communication in your work, it’s likely to be very helpful. So what are the potential risks and downsides? Although hundreds of millions of people have started using generative AI, it’s still very early days. In this guide I’ll highlight a few of the more obvious questions to illustrate some of the pitfalls. We also had a look at the issues at our DigiShift call in March.

The power of generative AI is that it allows you to write a simple natural language prompt such as ‘write a short paragraph on how charities could benefit from using AI tools’ and you instantly get a very usable response, such as: 

‘Charities can benefit from using AI tools in several ways. Firstly, AI can help them analyse and process large amounts of data, which can be used to identify areas of need and create more effective strategies for fundraising and outreach. This can lead to better-targeted campaigns and increased donations. Secondly, AI can help charities automate tasks, such as donor communication and donation processing, which can save time and reduce costs. Lastly, AI can also help charities monitor and evaluate the impact of their programs, allowing them to make data-driven decisions and adjust their strategies accordingly.’
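For the technically curious, here is roughly what the same request looks like in code. This is a minimal sketch, assuming the `openai` Python package (the 0.x SDK that was current when this article was written) and an API key; the chat website does essentially the same thing behind the scenes, so no code is needed to get started.

```python
# A minimal sketch, assuming the openai Python package (0.x SDK) and an
# API key in the OPENAI_API_KEY environment variable.
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "write a short paragraph on how charities could "
                   "benefit from using AI tools",
    }],
)

# The generated paragraph comes back as an ordinary string.
print(response["choices"][0]["message"]["content"])
```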

To help make sense of all this potential (and the risks), let’s look at four concrete examples. 

  1. Amnesty International image of police in Colombia 
    In early May, Amnesty International circulated (then deleted) an AI-generated image representing oppressive policing in Colombia. Amnesty had captioned the image to declare that it was AI-generated, and argued that a generated image protected the safety of human rights defenders on the ground. However, the backlash was swift, including criticism from journalists in Colombia who argued that it put their credibility at risk. It seems clear that ‘representing’ rights abuses, rather than documenting them, is problematic. 
     
  2. Using ChatGPT to help draft funding applications 
    Some organisations have tried out ChatGPT to help them write funding bids. One organisation has even offered a bid-writing ‘AI Bunny’ service online. This seems fine if organisations provide a detailed outline and check the resulting text carefully, but there is a real risk that some people will try a one-line prompt and not check the detail. From the funder’s perspective, the superficial quality of writing in bids may improve, yet it may become harder to assess which bids to shortlist. The advice here would be: give it a try, but proceed carefully and be prepared to stand behind every word you submit. Funders may need to find new ways of getting past the language to see which ideas are worth backing. 
     
  3. Using ChatGPT to help summarise or analyse long documents 
    A friend tried using ChatGPT to pull out key points from a legal contract. They were surprised to find that ChatGPT ‘hallucinated’ terms that were false. This is because tools like ChatGPT don’t check facts in the real world; they use probability and lots of computing power to generate the ‘most likely’ response, which will usually seem highly convincing (see the toy sketch after this list). So for now you’ll need to check facts carefully yourself, especially if you’re doing work like checking terms in contracts or looking for key issues in government white papers. 
     
  4. Using ChatGPT to provide advice or guidance 
    Lots of people have had good results using ChatGPT for generic advice such as ‘list 12 ideas for a 5-year-old’s birthday party’ or ‘write me a 3-month plan to get back into running’. It can also help you turn a rough outline into finished prose. But you need to remember that ChatGPT doesn’t have the ability to check facts. And voluntary sector organisations are often advising vulnerable people on high-stakes topics. So you will need to keep bringing in your own expertise and judgement. 
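
To see why the ‘hallucination’ in example 3 happens, it helps to remember that a language model only ever predicts a plausible next word; there is no step where facts are checked. The toy sketch below uses a hand-made probability table. It is purely illustrative (real models work over vast vocabularies and contexts), but it shows how a false-but-plausible answer can come out looking just as confident as a true one.

```python
# Toy illustration of next-word prediction -- not a real language model,
# just the principle: the output is sampled by plausibility, and no step
# checks it against the real world.
import random

# Hypothetical probabilities for the word that follows
# "The notice period in this contract is ... months".
next_word_probs = {
    "3": 0.5,   # plausible and (let's say) true
    "6": 0.3,   # plausible but false -- a 'hallucination' in waiting
    "12": 0.2,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Roughly half the time this prints a term that was never in the contract,
# phrased with exactly the same confidence as the correct answer.
print("The notice period in this contract is",
      random.choices(words, weights=weights)[0], "months.")
```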

What are ChatGPT and Stable Diffusion?

Generative AI is auto-complete on steroids – these tools are very powerful and easy to use. They enable people working on all kinds of content to get over the ‘blank page problem’.  

ChatGPT is a large language model developed by OpenAI, based on the GPT-3.5 architecture. It works by using a complex neural network trained on a vast amount of text data from the internet to generate human-like responses to user inputs. When a user types in a message or a question, the system analyses the input and generates a response based on its understanding of the language and the context of the conversation. The underlying model is periodically retrained and improved, drawing on feedback from users and new training data (it does not learn live from individual conversations). ChatGPT can understand and generate responses across a wide range of topics and it can be used for various applications such as customer service, language translation, and even creative writing. 

Stable Diffusion and other tools do the same thing, but turn text prompts into images and illustrations. 
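
For example, the header image of this article can be reproduced in a few lines. This is a minimal sketch assuming Hugging Face’s open source `diffusers` library and a machine with a GPU; hosted web interfaces offer the same thing with no code at all.

```python
# A minimal sketch using Hugging Face's diffusers library (an assumption;
# the article doesn't prescribe tooling). Requires the diffusers,
# transformers and torch packages, plus a CUDA-capable GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# The same prompt used for this article's header image.
image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```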

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to perform tasks that typically require human intelligence, such as learning, problem-solving, decision-making, and natural language processing. AI systems can be trained to recognise patterns, make predictions, and improve their performance over time, often with minimal human intervention. 

How should we respond now? 

  • With your team, try out ChatGPT and other tools on tasks and subjects you’re working on. This will help you understand more about how it works, and the potential and limitations it has for your context. 
  • Take a course like the Scottish AI Alliance's new Living With AI course: www.livingwithai.me 
  • Work out risky areas and safeguards for your context. For example, if you’re a provider of authoritative and credible guidance, think very carefully about how you’re going to check any content produced in tandem with generative AI. 
  • Expect a lot more highly convincing misinformation. And don’t assume you’d be able to spot false content based on its style and grammar alone. 

What next?

Generative AI tools are being embedded into standard apps such as Microsoft Bing and Google Workspace. All the signs are that big tech players are in a ‘move fast and fix things later’ mode as they look to preserve their market share and competitive edge. 

The open source software community is also starting to get impressive results with smaller large language models (LLMs). So even if the big tech players take a more responsible approach, there will be plenty of people out there trying new things at pace. Either way, generative AI tools will be available to everyone very soon. 

In practice, this will mean that creating content is likely to become a co-pilot experience, where the author becomes a creative director: prompting, then reviewing, refining and selecting responses from an AI tool. Written and visual content generated by AI will become very widespread. So we’re all going to need extra ways of checking whether content is accurate and truthful.  
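
A rough sketch of that co-pilot loop in code might look like the following; the `generate` parameter is a hypothetical stand-in for any text-generation call, such as the ChatGPT sketch earlier.

```python
# A sketch of the 'creative director' workflow: prompt, review, refine,
# select. `generate` is a hypothetical placeholder for any text-generation
# call; the human stays in the loop at every round.
def copilot_draft(generate, brief: str, max_rounds: int = 5) -> str:
    draft = generate(brief)
    for _ in range(max_rounds):
        print(draft)
        feedback = input("Press Enter to accept, or type a refinement: ").strip()
        if not feedback:
            break  # the author accepts the draft
        # Feed the author's judgement back in and generate a new version.
        draft = generate(
            f"Brief: {brief}\n\nCurrent draft:\n{draft}\n\n"
            f"Revise the draft following these instructions: {feedback}"
        )
    return draft
```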

Looking further ahead, putting LLMs to work on smaller, specific data sets could help organisations spot patterns and insights to improve their work. 


Three paragraphs in this article were written by the 3 May 2023 release of ChatGPT.

This article originally appeared in the May 2023 edition of TFN magazine.

  
