25 questions charities are asking about AI in 2025

This article covers audience questions that came up during a panel discussion on ‘Putting People and Values at the Heart of AI’ at The Gathering, SCVO’s national convention for charities, on 5 February 2025.

We received around 40 questions during the live session, and have grouped and de-duplicated them into five main categories:

  • Ethics
  • Bias & Racism
  • Geopolitics
  • Environment
  • Practical questions

We’ve covered audience questions about the Caddy tool from Citizens Advice in a separate blog.

Ethics

  1. Who is or should be accountable for unethical AI practices?
    AI developers and companies are responsible for developing products that are safe, transparent and inclusive by design. In practice, this doesn’t always happen as some companies are driven by commercial pressures or lack insight about how to develop products safely.
    However, even well-designed products can be used unethically (for instance, you could use a phone to help someone, or to defraud them). So users of AI and associated technology are responsible for using these systems safely and ethically. Where you have serious concerns about the ethics involved in developing an AI tool, you might choose to avoid using that tool altogether.
    In addition, policymakers and governments have a responsibility to create policies and legislation that ensure that the interests and safety of people are protected, especially the people who are most negatively impacted by AI. Policies, guidelines, regulations, and risk assessments are some of the accountability tools that you and other third sector organisations could ask for to ensure AI is inclusive and responsible by design.
  2. Are we responsible for adopting ethical AI, knowing that if we don't use it, others might, potentially putting us at a disadvantage?
    Is it a disadvantage to act in line with your ethics and values?
    Charities and organisations working in the social impact sector are uniquely trusted by the people we work with, and the people who support us. Compromising on our values could be far more damaging than any productivity ‘advantage’ we might see. Many of the claims about productivity gains from AI are speculative, while the risks and harms are real.
    There is already strong evidence to suggest that some AI tools, including generative AI and predictive modelling, can be highly inaccurate, biased and stereotyped, and can spread misinformation or fabricate data and sources of information. Choosing not to adopt AI tools does not mean you are missing out; in fact, you may be avoiding misinformation or discrimination being perpetuated through your work. There is also additional labour and stress on individuals associated with verifying or correcting generative AI responses, for example, which leads to inefficiencies and creates unnecessary work.
  3. Are there any not-for-profit AI providers?
    We don’t know of any AI providers operating on a fully charitable basis. But some providers are specialising in the social impact sector: for example, Helpfirst worked with Citizens Advice Scotland through the CivTech programme, and work from Citizens Advice in Stockport, Oldham, Rochdale and Trafford is being scaled and open-sourced nationally through the UK Government’s AI Incubator. There are also international examples of grassroots community action on AI and data, such as grassroots data activism to end gender-related violence in Latin America.
  4. How can we avoid profit being a motivator for companies using increased AI? What can the third sector do to advocate for/demand development of AI in a responsible and ethical way?
    In a market-based economy, profit will inevitably remain a driver for product development. However, spelling out what trustworthy, ethical and inclusive AI is and should be will help developers and vendors understand the need for responsible AI. For instance, charities often work with people and communities in situations that are overlooked or ignored by mainstream companies.
    Setting out a clear vision of how AI should work if it is to work for everyone will help developers and vendors understand what they need to do to serve society as a whole. And inclusive and ethical AI products will help everyone, not just those who are vulnerable. Digital and AI rights organisations are doing advocacy and policy influencing work that is worth supporting and drawing inspiration from, such as the European Network Against Racism’s work on the EU AI Act, and the Algorithmic Justice League and the Stop LAPD Spying Coalition in the US.
  5. At some point, if AI is ‘successful’ - trusted, efficient, fair - will it lead to a loss of jobs and the people workforce (can the need for human involvement in processes be sustained)?
    AI and automation are definitely changing the nature of work, though it’s unlikely that they will lead to large-scale job losses in the near future. The Institute for the Future of Work’s Pissarides Review is a good example of recent in-depth research into these issues.
  6. Should AI be open source with access to it a basic human right?
    Open source software and technology is definitely more inclusive, as it allows people and communities at all income levels to access it. Some companies are emphasising an open source approach, but large tech companies are often still motivated by commercial gain rather than wider social benefit.
    Asserting a right to technology is one thing, but making it happen in practice is more challenging. Making an AI model or its training dataset open source is one way to think about rights of access, but it is also important to think about open documentation of how the AI model is developed, tested, evaluated, and functions. Having access to open source technology might not completely level the playing field if some communities still lack training and access to the technology needed to run the applications and software.
  7. How do you speak to the public about the negatives of AI and its truthfulness, i.e. the hallucinations?
    It’s important to note that GenAI tools (eg ChatGPT, Claude, Gemini) are not reliably connected to the facts. They are designed to help users generate new content by synthesising text and images, not as sources of truth. So they are more helpful as creative drafting tools than for checking facts or completing research. The Good Things Foundation has published a useful report on AI literacy, including supporting AI literacy in community spaces. 
  8. AI has already been used with massive harm in Gaza. How can steps be taken to prevent use of AI in militaries?
    There are NGOs active in this space: for instance, ICRAC brings together experts to call for arms control, and DroneWars shines a light on the growing use of lethal drones. At various points, protests from staff members in companies such as Microsoft, Google and others have prompted a change in approach. But results are mixed, and in some cases international norms are being eroded. Scholars such as Timnit Gebru have long studied how the history of AI is intertwined with the military.
  9. Does the panel worry about existential risk from unaligned AI?
    Probably not – there are lots of actual harms and risks to be aware of. Increasing evidence suggests that the fixation on the future risks of AI to humanity is a distraction from the harms that AI is already causing right now.

Bias & Racism

  1. Is AI more or less racist/biased than people making decisions?
    The racial bias of AI is well documented, and many AI tools perpetuate discrimination, racism, racial stereotypes, misinformation and toxicity. Abeba Birhane’s work highlights how AI amplifies not only biases in datasets but also historical and emerging racial injustice. There is also a well-established view that AI tools inherit and replicate the biases of the developers who created them, as technology ultimately reflects (or represents) the biases and choices of its creators, consciously and unconsciously, in several ways. The evidence clearly rejects claims that AI somehow makes more objective decisions than humans, or that automating decision-making with algorithms removes subjectivity; in fact, automation can hide those biases or introduce new ones during the development process.
  2. How would you recommend addressing the gender imbalance and supporting women with using AI?
    There is a great deal of grassroots work, termed ‘data feminism’ by Catherine D’Ignazio and Lauren Klein, that provides inspirational examples of amplifying women’s voices and actions in AI. See an introduction from the Catalyst on the need for Feminist AI. Other key groups are Diverse AI and Women in AI, along with many of the amazing organisations trying to tackle gender inequality and inequity in technology as a whole. The Scottish AI Alliance are working with the Young Women's Movement on exploring AI and how it impacts young women.
  3. How do we minimise the impact of data colonialism caused by AI training?
    Centering people’s voices and expertise (by learning or lived experience) is essential in ensuring that the people who are most negatively impacted by AI are included in the scrutiny of AI and in decision- and policy-making about it.
    The work of the Participatory Harm Auditing Workbenches and Methodologies (PHAWM) project is an example of how power dynamics can be flipped, enabling non-AI experts, including the public and practitioners, to audit the harm and fairness of AI. Public participation and inclusion in decision-making should be considered throughout the AI development cycle, not only as an afterthought. Participation should also be properly resourced and informed by the feminist traditions of ethics of care and by anti-racist principles, so as not to repeat extractive or tokenistic forms of consultation. In addition, recognition and awareness – among developers, policymakers, the third sector, and the general public – of how systems of power, including racism and colonialism, can be perpetuated by AI models and algorithms will be critical to preventing oppressive systems of power and colonialism from being reinforced in AI training and in the field of AI more generally. A lack of understanding of, and attention to, the historical roots and current or emerging forms of colonial oppression is a common cause of racial bias and discrimination in AI, including the way the racialisation of people is replicated in real-life data and AI training.
  4. Does lived-experience run across, or run counter to, eliminating the bias that may exist within these systems?
    People’s lived experience is critical to providing a much-needed perspective for identifying and mitigating bias and discrimination in AI. Lived experience needs to be seen as being as valuable and important as technical expertise in the AI development process. Mark Wong has published and spoken extensively about the importance of valuing the expertise of adversely racialised people to counter racial bias and racism in AI and data systems. The use of co-creation methods is also extremely important in mitigating bias and discrimination in AI and data. The Minoritised Ethnic People’s Code of Practice for Equitable Digital Services provides useful pointers on the values and principles that should underpin equitable design of AI and data systems in public services, for example.

Geopolitics

  1. With the US and China the power players in AI, how can Scotland/UK/EU hope to influence responsible development?
    Scotland and the UK are probably not in a position to compete at the ‘bleeding edge’ of AI technology. But there is a real opportunity to head for a different finish line – trustworthy, ethical and inclusive AI which provides reliable value for everyone.
  2. Very recently, the Chinese AI company DeepSeek released what looks like a much less polluting AI, many parts of which are open source. Is this good news? Should Scottish charities be looking into it?
    With any new tool, it’s wise to approach with caution. The narrative is that it’s less polluting, but if DeepSeek achieved its results by distilling what OpenAI’s and Meta’s models have already done, it’s probably not any less polluting in practice. But it’s interesting to have a new player in the game, which may dilute the current monopoly a bit.

Environment

  1. How do we address the massive environmental impact of AI use?
    The Scottish AI Alliance have published a new blog on the environmental impacts of AI. The environmental impacts are significant: estimates vary, but most GenAI tools use 20 to 60 times as much energy per prompt as a search query. Meanwhile, large tech companies such as Microsoft and Google have abandoned their commitment to reach net zero by 2030, as they bring many more data centres online. Developing and running AI models requires large amounts of new hardware (eg hundreds of thousands of Nvidia chips). New AI tools have prompted phone manufacturers into a ‘super upgrade’ cycle, which means that consumers are being pressured to change and upgrade smartphones more frequently.
  2. Should organisations include their use of AI when measuring their carbon footprint?
    AI and digital technology use does have environmental impacts, but at an individual and organisational level it’s probably not among your largest environmental impacts. You may wish to measure and report your AI use if you are using it intensively, for example for image generation, working with very large data sets, or other processes that require a lot of computation. A rough worked example of such an estimate is sketched below.
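    If you do decide to measure your AI use, here is a minimal sketch in Python of how you might estimate it. All of the figures are assumptions for illustration – the energy per search query, the 20 to 60 times multiplier quoted above, and the grid carbon intensity all vary by provider, region and year – so treat the output as an order-of-magnitude guide rather than a reportable number.

```python
# Illustrative sketch only: estimates the monthly carbon footprint of
# generative AI prompts, using the rough "20x to 60x a search query"
# range quoted above. All constants are assumptions, not published figures.

SEARCH_QUERY_WH = 0.3           # assumed energy per web search, in watt-hours
GENAI_MULTIPLIERS = (20, 60)    # GenAI prompt energy as a multiple of a search
GRID_KG_CO2E_PER_KWH = 0.2      # assumed grid carbon intensity, kg CO2e per kWh

def genai_footprint_kg(prompts_per_month: int) -> tuple[float, float]:
    """Return a (low, high) estimate of monthly emissions in kg CO2e."""
    estimates = []
    for multiplier in GENAI_MULTIPLIERS:
        kwh = prompts_per_month * SEARCH_QUERY_WH * multiplier / 1000
        estimates.append(kwh * GRID_KG_CO2E_PER_KWH)
    return estimates[0], estimates[1]

if __name__ == "__main__":
    # e.g. a small team sending 5,000 prompts a month
    low, high = genai_footprint_kg(5000)
    print(f"Estimated monthly footprint: {low:.2f} to {high:.2f} kg CO2e")
```

    At 5,000 prompts a month, these assumed figures give roughly 6 to 18 kg CO2e per month, which illustrates the point above: significant in aggregate across the sector, but unlikely to be an individual organisation’s largest impact.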
  3. Which AI models are most environmentally friendly?
    This is a fast-moving and slightly technical subject, but the Hugging Face machine learning/AI community recently started publishing an AI Energy Use leaderboard.

Practical questions

  1. How safe is it to have ChatGPT on work devices?
    You will need to make your own assessment based on your context. But it’s worth reviewing the app permissions granted to ChatGPT and where the app will be storing (and sharing) data. In general, you should avoid putting any sensitive or personal data into a generative AI tool, as it is hard to have complete confidence about how this data might be used; for example, it may be used as training data in the future.
    Having said that, identifying recognised and safe AI tools to use in a work context will help avoid the situation where your team use them on personal devices ‘under the radar’, which is considerably more risky.
  2. Public Literacy: What conversations are being held as to AI skills development for the general population?
    The Scottish AI Alliance have just reopened their popular ‘Living with AI’ online course. This free course is designed to be accessible to any member of the public to help them develop AI literacy and an ability to critically engage with it.
    If you’re looking to learn as an organisation, you can check out events from DataKind. There is also a great new resource at https://www.aiplaybookforcharities.com/
  3. 75% of charities have under £100k in funding. How can we make AI education and test-and-learn projects accessible to the majority of our sector? If we don’t, the gap within our sector will increase.
    Off-the-shelf tools are becoming commodity technology, and hence affordable for most. You can easily set up a small test-and-learn project, where you build a basic service or way of working to test an idea. This will typically take a few hours’ work and may not require any paid subscriptions.
    For any longer-term use of tech, you’ll need to work out whether the costs involved in building it are justified by the benefits you gain from using it. There is a wider issue: charities often lack surplus budgets to invest in new tech, and are usually not able to increase costs in the way that the private sector can. But a well-designed test-and-learn project should enable you to make a clear judgement about whether using AI (or any technology) generates enough savings to justify its cost. A minimal worked example of that kind of calculation is sketched below.
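    To make that judgement concrete, here is a minimal cost/benefit sketch in Python. The hourly staff cost, hours saved, subscription cost and setup time are all hypothetical placeholders, not recommendations; substitute the figures you observe from your own test-and-learn project.

```python
# Illustrative sketch only: a simple cost/benefit check for a test-and-learn
# project. All figures are hypothetical placeholders, not recommendations.

HOURLY_STAFF_COST = 18.0      # assumed fully-loaded hourly staff cost, in pounds
HOURS_SAVED_PER_MONTH = 6     # assumed time saved by the new tool or process
MONTHLY_TOOL_COST = 40.0      # assumed subscription / running cost, in pounds
SETUP_HOURS = 4               # assumed one-off staff time to set the tool up

monthly_saving = HOURLY_STAFF_COST * HOURS_SAVED_PER_MONTH
net_monthly_benefit = monthly_saving - MONTHLY_TOOL_COST
setup_cost = HOURLY_STAFF_COST * SETUP_HOURS

print(f"Net benefit per month: £{net_monthly_benefit:.2f}")
if net_monthly_benefit > 0:
    months_to_break_even = setup_cost / net_monthly_benefit
    print(f"Setup cost recovered after {months_to_break_even:.1f} months")
else:
    print("The tool costs more than it saves each month at these figures")
```

    With these placeholder figures the tool pays for its setup in about a month; with your own figures the answer may well go the other way, which is exactly the judgement the test-and-learn project is meant to inform.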
  4. Is it possible to get ethics clearance for projects that use AI with people from vulnerable groups, in the same way you can at universities, for people who are not working in academia?
    There is no simple tool to secure an ethics clearance. When working with vulnerable groups, you should proceed with a great deal of caution to ensure that people are not harmed and their rights are protected. There are a number of useful resources, for example:
    • https://www.primecommunities.online/code-of-practice
    • ICO guidance on fairness and discrimination
    • https://www.adalovelaceinstitute.org/report/participatory-data-stewardship/
    • https://www.equalityhumanrights.com/guidance/artificial-intelligence-public-services
  5. If we use AI to create materials - who owns them?
    This is a multi-billion dollar question – which is the focus of a number of high-profile campaigns from creative industries, and significant lawsuits from copyright holders. The UK Government has recently been consulting the public on revisions to copyright legislation to remove ‘barriers’ to AI development. But this move has attracted significant controversy.
  6. How do you know which fundraising platforms are 'ethical, fair and transparent' in their approach to AI? Does the Scottish AI Alliance have an assessment tool which could be utilised?
    There is no dedicated assessment tool for this. You will need to make your own judgement, referring to your organisation’s ethics and values, and your ethical fundraising policy if you have one. You should also review the charity fundraising code of practice and guidance from the ICO. With any fundraising platform, you’ll want to understand their pricing and fee model and satisfy yourself that it is ethical and aligns with your values. Some funders have published guidance on the use of AI in funding applications.
  7. How should charitable/voluntary orgs prepare for their clients possibly using "AI agents"?
    AI agents are emerging tools where a user can get an AI platform to perform tasks such as making phone calls or completing web forms. Some software companies claim this will be a new form of user experience, where we complete online activities by asking an agent to perform simple tasks for us, rather than searching, browsing and completing tasks ourselves. This technology is still at an early stage, and still faces issues around accuracy and the privacy of user data.
    There is probably not an immediate need for charitable organisations to prepare for this technology. But in the longer term, it emphasises the need for up-to-date, well-structured web content. Making your website accessible and accurate will deliver immediate benefit to users today, while making it ready for AI tools and agents in the future.
    If you’re operating an enquiry line, email inbox, or signup form, you might want to consider how you’d respond to enquiries mediated via an AI agent. Would you want to block these queries at an early stage, if you’re unable to identify a human user? Or might you want to allow them, on the basis of making a service accessible to people who need tech support to communicate well?
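    If you did want to experiment with screening at an early stage, one common (if imperfect) technique is a hidden ‘honeypot’ form field combined with a simple user-agent check: automated tools often fill in fields that a human never sees. The Python sketch below is a hypothetical illustration of that general technique, with assumed field and header names. It will not reliably identify AI agents specifically, and may wrongly flag people using assistive technology, so it is better suited to flagging submissions for review than to blocking them outright.

```python
# Illustrative sketch only: a very basic screen for possibly-automated form
# submissions, using a hidden "honeypot" field and a simple user-agent check.
# This will not reliably identify AI agents, and may wrongly flag assistive
# technology, so treat flagged submissions as "review", not "reject".

AUTOMATION_HINTS = ("bot", "crawler", "python-requests", "headless")

def looks_automated(form_fields: dict[str, str], user_agent: str) -> bool:
    """Return True if a submission shows common signs of automation."""
    # A hidden field (here assumed to be named "website_url") that humans
    # never see: if it has been filled in, the submission was likely automated.
    if form_fields.get("website_url", "").strip():
        return True
    ua = user_agent.lower()
    return any(hint in ua for hint in AUTOMATION_HINTS)

if __name__ == "__main__":
    submission = {"name": "A. Person", "message": "Can you help?", "website_url": ""}
    print(looks_automated(submission, "Mozilla/5.0 (Windows NT 10.0)"))  # False
    print(looks_automated(submission, "python-requests/2.31"))           # True
```

    Whether you use something like this at all comes back to the question above: a service designed to be accessible may deliberately choose to accept agent-mediated enquiries rather than filter them out.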
