Some of the information on this page is based on a growing list, curated by the Civic AI Observatory, and reproduced with their permission.
The Civic AI Observatory is a joint initiative of Nesta and Newspeak House. You can find out more and sign up for their newsletter here. If your organisation already has a policy, or is planning to make one, you can share it with the Civic AI Observatory: hello@civicai.uk
People in your organisation will already be using AI, so you’ll want a policy on employee use, similar to a “use of social media” policy. Individuals and teams are already actively experimenting with tools such as ChatGPT, and it’s important to acknowledge this and introduce appropriate structures to steer that experimentation in the right direction.
A policy should cover both opportunities and risks. Employees may benefit from training or allocated time to share and explore AI tools together. Depending on your organisation, there may also be pressing issues related to security, privacy, reputational risk, or wider ethical concerns. In general, the policies we have seen have been quite balanced, recognising the possibilities of the technology as well as warning against its dangers.
Here is a selection of examples that may be useful in forming your own policies:
It’s also important to have a policy for any tools you’re buying. We expect many agencies and startups to be selling new AI products, so it’s helpful to know how to evaluate them and work out which ones make sense for your organisation.
Here are the best resources we have found so far:
Your customers, members, donors, or other stakeholders may be asking questions about your approach to AI. For many organisations, simply publishing their internal policies may be enough. However, in certain sectors such as journalism, AI poses added reputational risks, so organisations are particularly keen to reassure their audiences.
Here are samples we have seen from various news organisations:
Despite a few negative stories, public perception of generative AI use in civic contexts has been surprisingly positive. Recent research from the Centre for Data Ethics and Innovation suggests that most people are open to the use of foundation models within the public sector, especially in low-risk use cases, provided that there is human review of outputs and accountability for decisions.
If you have seen or done any other research like this, even anecdotally, let the Civic AI Observatory know: hello@civicai.uk
There’s no doubt that generative AI will be used to improve digital services and enable new kinds of user experience, but this is very new and what it will look like is not yet clear. Working this out will require both technical knowledge of what the models can do and detailed internal knowledge of the organisation and service. Digital product teams can start thinking about simple applications, but model capabilities are still improving quickly, and effective design patterns will emerge as the ecosystem matures and tooling improves.
If your organisation has already been using standard machine learning, then perhaps you already have some kind of responsible AI policy, but in light of generative models you may want to update it. There will be many new possible applications that you may choose to pursue or avoid, and you may want to review your approach to disclosure, testing, data privacy, oversight, and so on.
“Non-generative AI” policies are a relatively mature area now - this matrix for selecting responsible AI frameworks gathers and compares many of them - but we haven’t yet seen many that account for generative AI specifically:
Note that we’re not talking about using generative AI to help with software development, although there is an entire revolution going on in that area with tools like GitHub Copilot (these should also be covered in your employee use policy!).
Anyone who receives lots of text-heavy documents - for example, grant applications, tenders, job applications, even student essays - will have noticed that their lives changed in 2023: many more submissions, often of dubious quality. There’s a whole subfield on using AI to evaluate applications - maybe adding bias, maybe removing it - but in the meantime, you might want to provide guidelines for your applicants on whether and how they should use AI to produce their submissions.
We’ve only found one example so far, from The Research Funders Policy Group:
As for education, there’s lots to say here and we’ll likely come back to it in a future issue, but the speed of change is fascinating and might inspire ideas for the emerging problem of written submissions more generally: