Careful now! Your emerging AI strategy 

Can you have a strategy for an emerging, frothy technology like AI? Yes, you can, but it needs to be realistic and flexible. In this blog, I’ll highlight some of the areas you should cover. We’ve got more in-depth guidance in our AI guide for the voluntary sector.

The core of your strategy should be about improvements that will deliver value even if AI turns out to be less relevant to your context, or takes longer than expected to deliver on the hype. Minding the hype is important – you don’t need a strategy for things that don’t exist (or don’t actually work) yet. Your strategy should also help you spot dodgy solutions that claim lots but deliver little.  

Right across your organisation, you need to create a culture of curiosity and care. This curiosity is about being engaged and hands-on with new tools. It’s also about spotting longer-term trends and seeing how they might impact your context and work. 

The care part is crucial, too. Trust and safety are paramount. In digital, we often talk about ‘failing fast’ – trying out a small pilot to gain insights and spot pitfalls before committing to a full-scale project. With AI-based tools and platforms, you need to ‘fail safe’, too. You need to make sure that any testing you do doesn’t put users at risk, or threaten your reputation.

  1. Users, their context and needs 

User needs won’t change overnight – folk will still live in the same context and want the same things. But they might have growing expectations around how they interact with your services and content. For example, once the tech becomes reliable, users might prefer using a chat-based AI agent to help them find content rather than browsing the web or using a search engine.  

  2. Leadership and culture

You don’t need to become a technical expert – and even experts are still playing catch-up at this stage. But you do need to know enough to spot opportunities and risks and give your team some high-level direction. In particular, you should be able to help your team prioritise and work out where the most impactful changes are. And you should have enough knowledge to make sound judgements about which technologies and approaches are relevant to your context.

  3. People, skills and confidence

Closely related to leadership are the skills and confidence of your team. Empowering your whole team is important because people at the sharp end of delivery will have the most insight into service users and what they want to do. Combining this insight with careful curiosity about technology will help your organisation act strategically.

Making sure that your team feel comfortable learning and continuously developing in their roles will also help them adapt effectively to new technologies like AI. Your team shouldn’t be afraid of being replaced by AI, but they should be open to the possibility that the way they work may change, in a positive way.

  4. Data

The focus on generative AI tools has distracted lots of organisations from good data practice, but you can’t get away from data: you can’t have effective and reliable AI without good data. For example, you won’t be able to reliably spot patterns in service data unless that data is of high quality. And an AI-based advice tool will still need a store of trusted and reliable advice to work well.
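
As a rough illustration (this sketch isn’t from our guide, and the file and column names are invented), here’s the kind of basic quality check you might run on service data before trusting any AI-driven analysis of it:

```python
import pandas as pd

# Hypothetical service-data export; the file and column names are
# placeholders for whatever your organisation actually records.
df = pd.read_csv("service_records.csv")

# Basic quality checks before trying to spot patterns in this data.
report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_by_column": df.isna().sum().to_dict(),
}

# Flag columns where more than 20% of values are missing
# (an arbitrary threshold, chosen for illustration).
patchy = [
    col for col, n in report["missing_by_column"].items()
    if n / report["rows"] > 0.2
]

print(report)
print("Columns too patchy to rely on:", patchy)
```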

Data protection and safety are crucial, too. The ICO has lots of advice in this area. The key principles are to always have a justifiable basis for any data processing, and to be completely transparent with your users about how you are using their data.

When it comes to data, there's a real risk of bias and discrimination. This means you should be very cautious about using data-driven or machine learning approaches for high-stakes decisions. And you should make sure that you understand any bias or limitations within the data you are using.
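
One quick, practical check (again, an illustration rather than a recipe from our guide) is to compare outcome rates across groups in your historical data before letting it drive any decisions. This sketch assumes a hypothetical decisions.csv with made-up ‘group’ and ‘approved’ columns:

```python
import pandas as pd

# Hypothetical historical decision data: one row per case, recording the
# group a person belongs to and whether their application was approved.
df = pd.read_csv("decisions.csv")

# Approval rate per group; large gaps are a warning sign that a model
# trained on this data would reproduce the same skew.
rates = df.groupby("group")["approved"].mean().sort_values()
print(rates)

gap = rates.max() - rates.min()
if gap > 0.1:  # arbitrary illustrative threshold
    print(f"Approval rates differ by {gap:.0%} between groups - "
          "investigate before automating anything.")
```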

  5. Content, communications and marketing

Generative AI has made huge waves in the field of content and communication. But it won’t replace real expertise or editorial judgement. You might be able to use AI tools to help produce routine and straightforward content such as social media posts, and as a sounding board for generating ideas and brainstorming. But you’ll still need to carefully and critically review any draft output. 

New ways of browsing and accessing content will lead to a new type of content strategy, as AI tools allow people to extract relevant highlights from longer-form content.  

  6. Cyber security and safety

The main short-term risk is around information security. Users may put confidential or personal data into chat-based AI tools. You should ensure that your team don’t do this, as there is a risk that this data may be retained or shared with other users. 
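
As a simple supporting safeguard (an illustrative technique, not something prescribed in our guide), you could give your team a quick way to screen text for obvious personal data before it goes anywhere near an external chat tool. The patterns here are crude and the example is a nudge, not a substitute for proper data protection practice:

```python
import re

# Crude patterns for obvious personal data (UK-flavoured, illustrative only).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_phone": re.compile(r"(?:\+44|\b0)(?:\s?\d){9,10}"),
    "postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b", re.I),
}

def check_before_sharing(text: str) -> list[str]:
    """Return the kinds of personal data spotted in the text, if any."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

found = check_before_sharing("Contact Jean at jean@example.org, EH3 6BB.")
if found:
    print("Don't paste this into an external AI tool - it contains:", found)
```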

It’s important to be transparent about your use of AI, and any limitations it might have. This will help users trust your services and empower them to give you useful feedback. With AI-based tools, you may not be able to spot every bad consequence during testing. 

Phishing and social engineering attacks (where people are fooled into thinking a message is genuine) are likely to become more widespread and convincing now that hackers have access to generative AI tools. 

Last modified on 3 October 2024