Tools like ChatGPT, Google Bard and Stable Diffusion have been hitting the headlines since the start of the year. Why all the hype about generative AI? We’re not all going to be replaced or turned into paperclips next week, but generative AI will have a profound impact on the way we work, and within a fairly short timeframe.
If you can use a search engine, you can work with generative AI. And if your work involves any written communication, generative AI is likely to be very helpful. So what are the potential risks and downsides? Although hundreds of millions of people have started using generative AI, it’s still very early days. In this guide I’ll highlight a few of the more obvious questions to illustrate some of the pitfalls, which we also explored at our DigiShift call in March.
The power of generative AI is that it lets you write a simple natural-language prompt, such as ‘write a short paragraph on how charities could benefit from using AI tools’, and instantly get a very usable response:
‘Charities can benefit from using AI tools in several ways. Firstly, AI can help them analyse and process large amounts of data, which can be used to identify areas of need and create more effective strategies for fundraising and outreach. This can lead to better-targeted campaigns and increased donations. Secondly, AI can help charities automate tasks, such as donor communication and donation processing, which can save time and reduce costs. Lastly, AI can also help charities monitor and evaluate the impact of their programs, allowing them to make data-driven decisions and adjust their strategies accordingly.’
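For readers who want to go a step further, the same prompt can also be sent programmatically. Here is a minimal sketch using the `openai` Python package roughly as it stood in spring 2023 (the model name and key handling are illustrative assumptions, not a recommendation):

```python
# Minimal sketch: send the article's example prompt to OpenAI's chat API.
# Assumes the pre-1.0 `openai` package and an API key you supply yourself.
import openai

openai.api_key = "YOUR_API_KEY"  # replace with your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Write a short paragraph on how charities could "
                   "benefit from using AI tools.",
    }],
)

print(response.choices[0].message.content)
```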
To help make sense of all this potential (and the risks), let’s look at four concrete examples.
Generative AI is auto-complete on steroids – these tools are very powerful and easy to use. They enable people working on all kinds of content to get over the ‘blank page problem’.
ChatGPT is a large language model developed by OpenAI, based on the GPT-3.5 architecture. It works by using a complex neural network trained on a vast amount of text data from the internet to generate human-like responses to user inputs. When a user types in a message or a question, the system analyses the input and generates a response based on its understanding of the language and the context of the conversation. The model is constantly learning and improving, based on the feedback it receives from users and the data it analyses. ChatGPT can understand and generate responses across a wide range of topics and it can be used for various applications such as customer service, language translation, and even creative writing.
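Strip the jargon back and the core mechanic is next-token prediction: the model repeatedly guesses the most plausible next word-fragment and appends it to the text so far, which is why ‘auto-complete on steroids’ is a fair summary. Here is a toy sketch of that loop, using the small open GPT-2 model via the Hugging Face `transformers` library as a stand-in (ChatGPT itself is only available as a hosted service):

```python
# Toy illustration of the 'auto-complete' loop behind language models:
# repeatedly predict the most likely next token and append it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "Charities can benefit from AI because"
input_ids = tokenizer(text, return_tensors="pt").input_ids

for _ in range(20):  # extend the text by 20 tokens
    with torch.no_grad():
        logits = model(input_ids).logits
    next_id = logits[0, -1].argmax()  # pick the single most likely token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Real chat systems sample from the probability distribution rather than always taking the single top token, and add further training on human feedback, but the underlying prediction loop is the same.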
Stable Diffusion and other tools do the same thing, but turn text prompts into images and illustrations.
Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to perform tasks that typically require human intelligence, such as learning, problem-solving, decision making, and natural language processing. AI systems can be trained to recognize patterns, make predictions, and improve their performance over time, often with minimal human intervention.
Generative AI tools are being embedded into standard apps such as Microsoft Bing and Google Workspace. All the signs are that the big tech players are in ‘move fast and fix things later’ mode as they look to preserve their market share and competitive edge.
The open-source software community are also starting to get impressive results with smaller large language models. So even if the big tech players take a more responsible approach, plenty of people will be trying new things out at pace, which means generative AI tools will be available to everyone very soon.
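To give a sense of how low the barrier already is, here is a hedged sketch of running a small open model locally with the `transformers` pipeline helper; the model name is just one illustrative choice, and the output quality will be well below ChatGPT’s:

```python
# Sketch: run a small open-source model locally in a few lines.
# The model choice is illustrative; many small open models exist.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Three ways a small charity could use AI:",
    max_new_tokens=60,
)
print(result[0]["generated_text"])
```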
In practice, this will mean that creating content is likely to become a co-pilot experience, where the author becomes a creative director: prompting, then reviewing, refining and selecting responses from an AI tool. Written and visual content generated by AI will become very widespread. So we’re all going to need extra ways of checking whether content is accurate and truthful.
Looking further ahead, putting LLMs to work on smaller, specific data sets could help organisations spot patterns and insights to improve their work.
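As one hedged illustration of that idea, the sketch below embeds a handful of invented feedback comments with a small open model and scores how similar they are, the kind of pattern-spotting that could surface recurring themes in supporter or service-user feedback (it assumes the `sentence-transformers` package; the sample data is made up):

```python
# Sketch: spot recurring themes in short text records by embedding them
# and comparing similarity. Sample comments are invented for illustration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

feedback = [
    "The helpline was hard to reach at weekends.",
    "Weekend phone support never picks up.",
    "Great volunteer training sessions this spring.",
]

embeddings = model.encode(feedback, convert_to_tensor=True)
similarity = util.cos_sim(embeddings, embeddings)

# The first two comments score highly against each other, flagging a
# recurring theme (weekend helpline access) worth a closer look.
print(similarity)
```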
Three paragraphs in this article were written by the 3 May 2023 release of ChatGPT.
This article originally appeared in the May 2023 edition of TFN magazine.