What can librarians do about the environmental destruction of AI?
Below is an essay I wrote as part of the Master of Information Studies I am currently undertaking (the degree to do to become a librarian).
Also, if you haven't already, please sponsor me for the Fun Run 4 Mutual Aid I am doing. And if you can't donate (or even if you do donate) please spread the word and let others know so more people can experience the joy of giving!
AI and the United Nations Sustainable Development Goals:
How librarians can educate themselves and library users to combat the environmental destruction of AI technologies
When the United Nations (UN) Sustainable Development Goals were released in 2015, AI was not a part of mainstream discourse. In the years since, AI technology has exploded in popularity and uptake. As critical thinkers and custodians of the information society, librarians have a responsibility to investigate the sustainability (or lack thereof) of AI technology and spread awareness to library users. This essay will define AI in the context of the knowledge economy, detail environmental impacts of AI with reference to specific UN Sustainable Development Goals, and explore the tensions between AI technology and diversity with reference to Indigenous Knowledge Systems and Indigenous Data Sovereignty. Finally, the essay will suggest how librarians, especially public librarians, can respond to AI in the information society in a way that supports the UN Sustainable Development Goals.
The term “AI”, which stands for “artificial intelligence”, is primarily a marketing term used to sell a diverse range of technologies by implying that they are human-like in their “judgment, perception or creativity” (Bender & Hanna, 2025, p. 5). Generative AI systems known as large language models (LLMs) are trained on huge amounts of text and produce probabilistically likely strings of word tokens, which humans then interpret as genuine language (Bender & Hanna, 2025).
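The core mechanism can be illustrated with a toy sketch: a model repeatedly picks a probable next token given the tokens so far. The hand-written probability table below is purely illustrative (real LLMs learn billions of parameters from text, not a lookup table), but the generation loop is the same in spirit.

```python
import random

# Toy stand-in for an LLM's learned next-token probabilities.
# (Illustrative only; not how real models store or compute these.)
bigram_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
}

def generate(start, max_tokens=3, seed=0):
    """Repeatedly sample a probabilistically likely next token."""
    random.seed(seed)
    tokens = [start]
    for _ in range(max_tokens):
        options = bigram_probs.get(tokens[-1])
        if not options:  # no continuation known: stop
            break
        words, weights = zip(*options.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the"))  # a plausible-sounding string, with no intent behind it
```

The point of the sketch is that nothing in the loop checks whether the output is true; it only checks what is likely, which is why the output is synthetic text rather than communication.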
As text-based generative AI is taken up, a growing proportion of the information comprising the information society, and being traded in the knowledge economy, is ‘synthetic text’ – text created without human intent. According to Godin, there was a shift in the late 1970s in the US and other OECD countries from an economy based on manufacturing to an economy that is “information-oriented”, known as the “information economy” (Godin, 2008, p. 262) or the knowledge economy. In the US, the information sector grew from less than 10% of the workforce in 1860 to more than 40% in 1970 (Godin, 2008, p. 262).
Of specific concern to information professionals is the usage of generative AI in research spaces. For example, Google recently rolled out ‘AI Overviews’, which place a generative AI response to the search query above the search results. This feature is not easy to opt out of. The example is especially relevant to librarians who serve members of the public, who may not understand how the AI Overview is generated, the risk of hallucinations[1], or the need to check the veracity of sources.
In an academic library context, CSU recently added an AI ‘research assistant’ to its Primo database search, which generates search results from a natural language query[2] along with a synthetic text summary. In February 2025, the National Library of Australia released a report outlining its approach to adopting and using AI technologies to “ethically unlock the national collection” and develop “new ways to collect, understand and share the collection in the face of evolving regulation” (National Library of Australia, 2025, p. 3).
The training and usage of generative AI threatens the success of several of the UN Sustainable Development Goals. This essay focuses on environmental sustainability. Goal 6 is “Ensure availability and sustainable management of water and sanitation for all” (United Nations, 2015, p. 20). 80% of the global population experience some level of water scarcity for at least one month of the year (Liu et al., 2025, p. 1) and in the future water scarcity is predicted to increase for a greater number of people due to factors including urbanisation, population growth, and climate change (He et al., 2021). Goal 7 is “Ensure access to affordable, reliable, sustainable and modern energy for all” (United Nations, 2015, p. 21). Ensuring all humans have access to basic energy needs is an ambitious goal in light of the challenge of climate change. Goal 13 is “Take urgent action to combat climate change and its impacts” (United Nations, 2015, p. 25).
Crawford points out that while the “technological imaginary” of AI is represented by images such as “glowing brains and blue-tinted binary code floating in space” (2021, p. 48), and “metaphors like ‘the cloud’ imply something floating and delicate within a natural, green industry” (2021, p. 41), these technologies in fact have a material environmental cost. AI technology relies on servers, known as ‘the cloud’, housed in data centres (Crawford, 2021, p. 41). According to Ammanath (2023), AI models use energy at two stages: training and real-life application. Currently the environmental impact is split roughly 20/80 between those two stages, meaning that as applications are taken up, computational power demands will grow even greater; by 2028, AI could be using more power than entire countries.
Establishing the exact environmental impact of AI training and usage is difficult because it is obscured as “corporate secrets” (Crawford, 2021, p. 43). Researchers have still attempted this on a small scale: one paper estimated the carbon footprint of the research presented at a single human-computer interaction (CHI) conference at “between 10,769.63 and 10,925.12 kg CO2e”, roughly equivalent to powering two homes in the US for one year (Inie et al., 2025, p. 10). This comparison brings to mind Sustainable Development Goal 7: what energy are we budgeting for human use?
As well as energy, AI technology requires huge amounts of water, which is used to cool data centres (da Silva Oliveira, 2025, p. 11). In 2020, GPT-3 used 700,000 litres of fresh water for training alone (da Silva Oliveira, 2025, p. 12) – remembering that training is much less resource intensive than application. Da Silva Oliveira writes that by 2027, global AI demand is predicted to use somewhere between 4.2 and 6.6 billion cubic metres of water; for comparison, this is more than half the annual usage of the UK (2025, p. 12). Water is political; it is essential for human life and sanitation, hence its inclusion in Sustainable Development Goal 6, yet those human needs can be deprioritised for profit. Crawford (2021) and da Silva Oliveira (2025) write separately about two different remote towns in the US where the local council made deals with tech companies, guaranteeing them huge amounts of water at favourable prices, potentially in the hope that the data centres would boost the area economically. And these are not areas with an abundance of water.
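The gap in scale between the training figure and the projected demand figure is worth pausing on; it can be checked with simple arithmetic (the input figures are from the sources cited above, and the unit conversion is my own):

```python
# Figures as cited from da Silva Oliveira (2025).
LITRES_PER_CUBIC_METRE = 1_000
gpt3_training_litres = 700_000       # fresh water used to train GPT-3 (2020)
ai_demand_2027_low_m3 = 4.2e9        # projected global AI water demand by 2027, low end
ai_demand_2027_high_m3 = 6.6e9      # projected global AI water demand by 2027, high end

# Convert the training figure to cubic metres for a like-for-like comparison.
gpt3_training_m3 = gpt3_training_litres / LITRES_PER_CUBIC_METRE  # 700 cubic metres

# Even the low-end 2027 projection dwarfs one model's training water use.
ratio = ai_demand_2027_low_m3 / gpt3_training_m3
print(f"Projected annual demand is {ratio:,.0f}x GPT-3's training water use")
```

In other words, the projected demand is millions of times the training figure, which is consistent with the point above that application, not training, dominates resource use.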
Just as biodiversity is essential to sustainability in nature, cultural diversity is essential to sustainability in the information society (Ziemba, 2019, p. 119). In the context of AI technologies, a lack of cultural diversity in datasets, as well as in the teams of humans leading and working across the design, training and monitoring stages, can lead to biased and in some cases harmful outcomes (Abdilla et al., 2021; Perera et al., 2025). For example, AI systems used for child protection disproportionately refer Indigenous children to services relative to overall case numbers (Perera et al., 2025, p. 3), and several studies have shown that when AI systems are used in healthcare settings, “unfair decision-making” perpetuates healthcare disparities in Indigenous communities (Perera et al., 2025, p. 13).
With regard to cultural diversity and Indigenous groups and peoples, two key issues to consider are Indigenous Knowledge Systems (IKS) and Indigenous Data Sovereignty. According to Perera et al., IKS refers to intergenerational knowledge systems common to most Indigenous cultures, “situated within an Indigenous paradigm that is always relational, purposeful, and grounded by tribal or ancestral histories”. This is in contrast to Western Scientific Knowledge Systems (WSK), or “respective settler knowledge systems” (Perera et al., 2025, p. 2). Indigenous Data Sovereignty (IDS) is the right of Indigenous people to protect, control and maintain their own data, including “human and genetic resources, literature, oral traditions, information knowledge about the environment, land, and visual and performing arts”, and this right is protected by Article 31 of the United Nations Declaration on the Rights of Indigenous Peoples (Perera et al., 2025, p. 13).
Respect for IKS is difficult to achieve in a field like AI, which exists within a framework of WSK. Abdilla et al., who experimented with a test case of applying Indigenous Protocols to AI using what they called Country Centred Design[3], found that when they outsourced the coding of a model they had designed to a non-Indigenous colleague, that person brought their own cultural knowledge systems into the project, including reductionism and the grand narrative of ‘survival of the fittest’. This led them to introduce the ‘blackfellas[4] in the loop’ standard, born of the necessity for Indigenous involvement at every step (2021, pp. 13-14).
The structure of AI systems also makes IDS difficult to protect, in that AI relies on open access to large datasets (Perera et al., 2025, pp. 12-13). Abdilla et al. conceived of an automated “restricted knowledge protocol”, referred to as “Proof of Aunty”, whose function was to block restricted knowledge from being revealed to a user without the relevant cultural authority. IDS was also relevant in an experiment Abdilla et al. ran mapping clans, in which they found they needed a protocol for birth rates, which is ‘women’s business’; the project was discontinued because the only team member briefing the technicians was male (2021, p. 15).
There are several options for librarians to contribute to the UN Sustainable Development Goals in regard to AI. The first step is to educate themselves about AI: what it is, how it is being incorporated into the information society and the GLAM sector, and what its drawbacks are in terms of sustainability. Librarians can use their professional networks for reading recommendations and their critical analysis skills to avoid being misled by what Bender and Hanna (2025) term “AI hype”. As noted above, AI is being pushed into the GLAM sector, but librarians can analyse where AI uptake is appropriate, necessary and in line with sustainability goals, and where it is not, enabling them to engage critically with vendors. Librarians can also educate their library users on AI, noting the risks rather than just teaching them how to use the technology.
One example of a library educating users about AI comes from the Law Library Victoria website, on a page called “Artificial Intelligence (AI) in the Law and Hallucinations”, which notes the need to verify AI output (Law Library Victoria, 2025). Several public libraries are running (seemingly uncritical) basic information sessions about AI and how to use it, such as the “AI Awareness Workshop” at Cambridge Library in Western Australia (Cambridge Library, 2025) and “AI explained” at Sunshine Coast Council Libraries in Queensland (Sunshine Coast Council, 2025). Other libraries are offering more critically engaged workshops: one on AI, misinformation and democracy at Bargoonga Nganjin library in North Fitzroy (City of Yarra, n.d.), and another on AI’s impact on the environment and the information society (State Library Victoria, n.d.).
AI technologies and AI-generated synthetic text are having an increasing influence on the information society. Librarians have an interest in being informed and in influencing the information society positively, and therefore need a thorough understanding of AI technologies that is as unbiased by AI marketing as possible. This essay has explored some of the pitfalls of AI technologies in terms of environmental impact, with specific reference to UN Sustainable Development Goals 6, 7, and 13, relating to access to water and energy, and action on climate change. (There was not scope here to discuss other UN Sustainable Development Goals that AI technologies arguably have a negative impact on.) The essay has also surveyed how AI technologies are not currently well aligned with Indigenous Knowledge Systems and Indigenous Data Sovereignty principles and therefore threaten cultural diversity, an essential element of sustainability. Finally, it suggested how librarians can support the UN Sustainable Development Goals with regard to AI technologies by educating themselves and their library users.
References
Abdilla, A., Kelleher, M., Shaw, R., & Yunkaporta, T. (2021). Out of the black box: Indigenous protocols for AI. https://hdl.handle.net/10536/DRO/DU:30159239
Ammanath, B. (2023). How to manage AI’s energy demand – today, tomorrow and in the future. World Economic Forum. https://www.weforum.org/stories/2024/04/how-to-manage-ais-energy-demand-today-tomorrow-and-in-the-future/
Bender, E. M., & Hanna, A. (2025). The AI con: How to fight Big Tech’s hype and create the future we want. The Bodley Head.
Cambridge Library. (2025). AI awareness workshop. https://www.cambridge.wa.gov.au/Discover/Whats-On/AI-Awareness-Workshop-Cambridge-Library
City of Yarra. (n.d.). AI, misinformation and democracy with Cory Alpert & Samantha Floreani. https://www.yarracity.vic.gov.au/our-libraries/programs/ai-misinformation-and-democracy-cory-alpert-samantha-floreani
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Da Silva Oliveira, D. G. (2025). The true tragedy of artificial intelligence: On human labor and economy, environment and health. Astrolabio, 30, 1-20. https://doi.org/10.1344/astrolabio.v1i30.49350
Godin, B. (2008). The information economy: The history of a concept through its measurement, 1949-2005. History and Technology, 24(3), 255-287. https://doi.org/10.1080/07341510801900334
He, C., Liu, Z., Wu, J., Pan, X., Fang, Z., Li, J., & Bryan, B. A. (2021). Future global urban water scarcity and potential solutions. Nature Communications, 12(1), Article 4667. https://doi.org/10.1038/s41467-021-25026-3
Inie, N., Falk, J., & Selvan, R. (2025). How CO2STLY is CHI? The carbon footprint of generative AI in HCI research and what we should do about it. CHI '25: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, Article 206. https://doi.org/10.1145/3706598.3714227
Law Library Victoria. (2025). Artificial Intelligence (AI) in the law and hallucinations.
Liu, W., Fu, Z., van Vliet, M. T. H., Davis, K. F., Ciais, P., Bao, Y., Bai, Y., Du, T., Kang, S., Yin, Z., Fang, Y., & Wada, Y. (2025). Global overlooked multidimensional water scarcity. Proceedings of the National Academy of Sciences, 122(26), Article e2413541122. https://doi.org/10.1073/pnas.2413541122
National Library of Australia. (2025). Artificial Intelligence Framework. https://www.library.gov.au/sites/default/files/documents/2025-03/nla-ai-framework-1-0.pdf
Perera, M., Vidanaarachchi, R., Chandrashekeran, S., Kennedy, M., Kennedy, B., & Halgamuge, S. (2025). Indigenous people and artificial intelligence: A systematic review and future directions. Big Data & Society, 12(2), 1-22. https://doi.org/10.1177/20539517251349170
Sunshine Coast Council. (2025). AI explained. https://library.sunshinecoast.qld.gov.au/whats-on/technology/ai-explained
State Library Victoria. (n.d.). Professor Kate Crawford: Mapping planetary AI. https://www.slv.vic.gov.au/whats-on/professor-kate-crawford-mapping-planetary-ai
United Nations. (2015). Transforming our world: The 2030 agenda for sustainable development. https://sdgs.un.org/sites/default/files/publications/21252030%20Agenda%20for%20Sustainable%20Development%20web.pdf
Ziemba, E. (2019). The contribution of ICT adoption to the sustainable information society. Journal of Computer Information Systems, 59(2), 116-126. https://doi.org/10.1080/08874417.2017.1312635
[1] Hallucinations are factual errors in AI-generated synthetic text. They happen because generative AI is designed to give the most probabilistically likely answer, not the most accurate one.
[2] A natural language query means asking a question like “What environmental destruction does AI cause?” rather than searching with Boolean operators, such as environment AND “artificial intelligence” NOT “marketing hype”.
[3] “Country” in the non-translatable Aboriginal English sense of “a system of language, totemic entities and kinship defined by landforms as markers, within a continental network of territories, rituals and peoples” (Abdilla et al., 2021, p. 5).
[4] “Blackfellas” being another Aboriginal English term that does not refer literally to skin tone or gender but in this context to “the right Indigenous person with the right authority in the right context” (Abdilla et al., 2021, p. 13).