At SCVO, we’ve just finalised our first AI policy guidelines. Here’s some background on how we put them together. SCVO have been advising and supporting charities of all sizes to use AI responsibly since May 2023, so you might wonder why it’s taken a while to finalise our own AI use policy.
Like many organisations, we needed an AI policy because staff were already experimenting with and using AI tools, and their use left grey areas that weren’t covered by existing IT and data governance policies. We needed something that struck the right balance between ‘curiosity and care’: some team members wanted support and encouragement to try new tools, while others would benefit from clear boundaries to keep them on track.
We didn’t rush into an AI policy, for several reasons.
Through my work on our digital evolution team, I have developed a number of resources for responsible AI use and, more broadly, digital strategy. In developing these, I’ve had access to a wide range of AI experts, as well as lots of conversations with charities of all sizes looking to apply AI responsibly in their own context. We were able to draw on these insights in shaping our own draft AI policy.
Like most policies and strategy papers involving digital and technology, an AI policy throws up quite a few questions. Should it be a standalone policy, or embedded within other policy areas? Should it devolve judgement to teams, or centralise it? What’s our risk appetite, and what’s our level of ambition? Over the longer term, we’d expect AI policy and principles to appear in other policy areas. But for now, we needed a starting point, and a short standalone policy would provide this.
Before getting started on a draft policy, we took time to convene some internal AI in Practice groups. We set these up as safe spaces where people could share both good and bad experiences, and ask any questions they were sitting with. These discussions helped us understand how staff were already using AI tools, and where they needed support or clearer boundaries.
We then brought together an expert group of people from HR, IT and data protection roles to review which other areas an AI use policy might need to connect with. While developing the draft, we focused on making clear links to SCVO’s wider values and behaviours. This led to a first draft of an AI use policy, which we sense-checked with the wider AI in Practice group. This stage helped us identify some helpful additions and changes.
We now have a ‘good enough for now, safe enough to try’ AI policy, which we’ll review and refine as needed. It’s not our definitive position on AI and how we use it, but it gives us a valuable starting point. By leading with insights from the AI in Practice groups, we were able to keep the policy grounded and accessible.