Atlas of AI: Unpacking the Hidden Costs of Artificial Intelligence

In her groundbreaking book Atlas of AI, Kate Crawford delves deep into the often-overlooked dimensions of artificial intelligence. She challenges the common perception of AI as a neutral, abstract technology, uncovering the deeply extractive industry behind it. Crawford’s work highlights how AI is not just a collection of code and algorithms, but a system built on the exploitation of natural resources, human labor, and ecosystems. AI, Crawford argues, is far from a benign tool for progress; it is an industry with significant hidden costs, from mining rare minerals to unpaid labor.

Crawford’s Atlas of AI reveals the darker side of AI that leaders in tech and policy need to understand if they aim to align AI strategy with broader environmental and labor responsibilities. Through her in-depth research, Crawford sheds light on the unseen mechanisms that power AI, pushing us to rethink the true cost of the technology that is often hailed as a force for good.

Exploring the Hidden Costs of AI: A Deeper Dive

In this blog, we will unpack the key insights from Crawford’s book, examining the mineral extraction, energy consumption, and human labor that sustain AI, as well as the e-waste created at the end of its life cycle. By confronting these hidden costs, we can better align AI development with ethical and sustainable practices. The goal is to understand the broader implications of AI and advocate for a future where its development is as responsible as it is innovative.

Mineral Extraction & Hardware: The Hidden Cost of Critical Minerals

One of the foundational elements of AI technology is the hardware that powers it, from the microchips running in servers to the batteries that fuel smartphones and other electronic devices. These technologies rely heavily on critical minerals—such as lithium, cobalt, and graphite—and on rare earth elements, all of which are crucial to the functioning of modern electronics, including AI systems. However, the extraction and processing of these minerals come with significant environmental and human costs that are often overlooked in the broader conversation about the benefits of AI.

Lithium: The Water-Intensive Process Behind AI Power

Lithium is essential for the batteries that power everything from electric cars to smartphones, making it a cornerstone of the AI hardware infrastructure. As AI adoption grows, so too does the demand for lithium, particularly for lithium-ion batteries. These batteries are key components not only for AI-driven devices but also for the energy storage systems needed to support AI’s energy-hungry cloud computing infrastructure.

However, the extraction of lithium is a highly water-intensive process that can have devastating effects on the local environment and communities. Lithium is primarily extracted from saline groundwater in salt flats, where the process of evaporation and concentration takes months and depletes large quantities of water. The majority of the world’s lithium reserves are concentrated in the Lithium Triangle, spanning Chile, Argentina, and Bolivia. These regions are already water-scarce, and the extraction of lithium further exacerbates the water crisis, threatening the livelihoods of local communities who rely on freshwater for farming and daily consumption.

In Chile, for example, mining operations consume millions of liters of water per day, impacting nearby agricultural systems and contributing to land degradation. Water scarcity in these regions, combined with the economic dependency on mining, creates a vicious cycle where local populations face resource depletion without the benefits of the revenues generated by lithium extraction. Crawford points out that these communities—often indigenous populations—are left with the environmental fallout, while much of the economic value of lithium extraction is siphoned off by multinational corporations and foreign investors.

Cobalt: The Human Cost Behind AI’s Power

Similarly, cobalt—another critical material used in the production of batteries—is central to AI hardware, particularly for energy storage systems and electric vehicle batteries. Over 60% of the world’s cobalt supply comes from the Democratic Republic of Congo (DRC), a country rife with human rights abuses, corruption, and political instability. Cobalt mining in the DRC is not only environmentally hazardous but also deeply exploitative in terms of labor practices.

Much of the cobalt is extracted by artisanal miners—often including children—who work in extremely dangerous conditions. These miners work in unregulated, high-risk environments without proper safety equipment, leading to frequent accidents, respiratory diseases, and fatalities. The mines are often shallow and unstable, and miners are exposed to toxic dust and chemicals. Despite the high value of cobalt in the tech industry, workers in the DRC earn a pittance for their labor. The communities surrounding these mines face serious health hazards, including heavy-metal exposure and respiratory illnesses from prolonged contact with the toxic dust and chemicals involved in extraction.

In addition to the human toll, the environmental impact of cobalt mining is severe. Deforestation, soil erosion, and pollution from mining chemicals and tailings contaminate the surrounding ecosystems, affecting local agriculture and wildlife. Crawford emphasizes that cobalt mining, driven by demand from AI and tech industries, is a key example of how the digital revolution is underpinned by unsustainable practices. The lack of proper regulation and poor working conditions highlight the need for greater accountability within tech supply chains.

Ethical Concerns: AI’s Reliance on Exploitation

The ethical concerns surrounding the extraction of critical minerals like lithium and cobalt are not limited to environmental degradation. Crawford highlights how these industries are built on exploitation—both environmental and human. The tech industry’s heavy reliance on these materials creates a moral dilemma for companies that champion innovation while simultaneously benefiting from harmful practices.

AI companies, in particular, have a vested interest in ensuring that their supply chains are free from exploitation. However, Crawford points out that many of the largest AI companies have failed to take responsibility for the conditions under which their hardware is produced. The lack of transparency in the sourcing of minerals and resources means that consumers, as well as policymakers, are often unaware of the true cost of the technology they use.

Crawford calls for greater accountability within the tech industry, urging companies not only to audit their supply chains but also to actively mitigate the harmful effects of mineral extraction. Tech giants like Apple, Google, and Tesla, which rely on these materials, must recognize their role in perpetuating these unethical practices and work toward ethical sourcing and sustainable mining. Crawford advocates for a shift toward responsible innovation, where environmental stewardship and human rights are prioritized alongside technological progress.

The Need for Accountability in AI Supply Chains

By highlighting the hidden costs of mineral extraction, Crawford’s Atlas of AI calls attention to the urgency of creating a more ethical and sustainable model for the AI industry. She stresses that, in order for AI to be a truly transformative force, it must evolve in a way that acknowledges and addresses its significant environmental and social costs. This involves reassessing the supply chains that power AI hardware and ensuring that the extraction of materials does not come at the expense of vulnerable communities or the planet.

Crawford calls for greater transparency in AI companies’ operations. The lack of visibility into the supply chains of tech companies has allowed these issues to persist unchecked. Crawford argues that ethical sourcing of rare minerals and better regulatory oversight are essential to reducing the harm caused by mineral extraction. Companies that rely on AI-powered hardware should adopt transparent policies, ensuring that they source materials from responsible suppliers who adhere to human rights standards and minimize environmental damage.

In conclusion, Crawford’s work underscores the urgent need for reform in the AI industry. The extraction of rare minerals like lithium and cobalt is not a problem that can be solved by technology alone. It requires a global effort to ensure that AI’s promise of progress is not built on the exploitation of people or the destruction of ecosystems. Leaders in tech must embrace accountability and push for sustainable practices that ensure AI is truly beneficial for all of society, rather than perpetuating the cycle of exploitation that underpins its development.

Cloud Infrastructure’s Energy and Water Footprint: The Hidden Environmental Costs of AI

While AI is often thought of as a virtual abstraction, its actual infrastructure is deeply resource-intensive, particularly when it comes to the energy and water required to run cloud-based services. As AI models grow in complexity and scale, they rely on massive networks of data centers that store and process the enormous amounts of data these systems need to function. These data centers, while seemingly invisible to the end user, are energy-hungry machines that consume vast amounts of electricity to power their servers, as well as water to cool the systems that run 24/7.

Cloud computing platforms, the backbone of modern AI services, are not just a convenience—they are a critical part of the digital infrastructure that powers everything from machine learning algorithms to deep learning models. That infrastructure, however, is anything but immaterial. It is an energy-intensive operation that raises serious concerns about the environmental footprint of AI technologies. Crawford’s Atlas of AI provides a crucial examination of how cloud infrastructure impacts both our energy grids and water resources—two of the most critical elements in sustaining the tech industry’s rapid growth.

The Energy Consumption of Data Centers: A Growing Problem

The energy consumption of cloud computing platforms is staggering, and it’s growing every year. Data centers, which host the servers and storage devices that power AI models, are estimated to account for about 1% of global electricity use. While this may seem like a small percentage, it’s an enormous amount of energy given the global scale of AI and tech services. This number is expected to increase significantly as the demand for AI services continues to rise, particularly with the growth of machine learning, deep learning, and other advanced AI technologies that require even more computational power.
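To make that scale concrete, a rough back-of-envelope calculation is useful. The short Python sketch below estimates the annual energy use and carbon emissions of a single hypothetical facility; the IT load, PUE (Power Usage Effectiveness), and grid carbon intensity are illustrative assumptions rather than figures from Crawford’s book or any specific operator.

```python
# Back-of-envelope sketch of one data center's annual energy use and emissions.
# All input figures are illustrative assumptions, not reported values.

it_load_mw = 30.0             # average IT equipment load, in megawatts (assumed)
pue = 1.5                     # Power Usage Effectiveness: total power / IT power (assumed)
hours_per_year = 8760         # hours in a year
grid_carbon_kg_per_kwh = 0.4  # grid carbon intensity, kg CO2 per kWh (assumed)

total_energy_mwh = it_load_mw * pue * hours_per_year
total_energy_kwh = total_energy_mwh * 1000
emissions_tonnes = total_energy_kwh * grid_carbon_kg_per_kwh / 1000

print(f"Annual energy use: {total_energy_mwh:,.0f} MWh")
print(f"Estimated emissions: {emissions_tonnes:,.0f} tonnes CO2 per year")
```

Even with these modest assumptions, a single facility lands in the hundreds of thousands of megawatt-hours per year, which is why the siting of data centers and the energy mix behind them matter so much.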

Fossil Fuels and Carbon Emissions

A significant portion of the energy consumed by data centers comes from non-renewable sources such as coal, natural gas, and oil. As Crawford points out in Atlas of AI, this means that the carbon footprint of AI and cloud computing is already substantial, and it is poised to grow as the industry expands. The demand for more powerful hardware and the need to process larger datasets means that more energy will be required to maintain and operate these data centers, pushing the industry further into reliance on fossil fuels.

The rise of AI technologies like self-driving cars, speech recognition, and real-time language translation demands ever-larger data centers capable of handling vast volumes of data. These increasingly complex applications require continuous, high-capacity computing power. As a result, the energy demand of these systems is expected to grow exponentially, potentially making the AI sector a major contributor to global carbon emissions unless more sustainable energy practices are adopted.

Water Usage in Cloud Computing: A Hidden Cost

Alongside the growing energy consumption of data centers, water usage is another often overlooked but crucial resource required to keep these facilities running. Cooling systems are necessary to keep servers from overheating. Because servers run continuously to process large volumes of data, these systems must operate around the clock, consuming significant amounts of water to absorb the heat the hardware produces.

Water Scarcity and Data Center Cooling

The environmental impact of cooling is particularly concerning in arid regions where water resources are already scarce. As data centers are often located in areas with low water availability—such as the Western United States, parts of Australia, and certain regions in Europe—the stress placed on local water supplies becomes a serious concern. Crawford’s book underscores how data centers in these regions exacerbate the water scarcity crisis, as large volumes of water are diverted from agricultural and domestic use to cool servers, often without any significant concern for the local communities or ecosystems affected.

In some regions, the water footprint of cloud services is almost as significant as the carbon footprint, contributing to a double environmental burden. The water used for cooling is typically not returned to the environment in a sustainable manner; rather, it is often discharged at higher temperatures, which can harm local waterways and disrupt surrounding ecosystems. The thermal pollution caused by these cooling systems can negatively affect aquatic life, including fish and plants that rely on cooler water temperatures.

Sustainable Practices: The Path Forward for Tech Companies

Crawford calls for greater awareness of how cloud infrastructure impacts both our energy grid and water systems. As the demand for AI and cloud-based services continues to surge, it’s clear that sustainable practices must become a priority for tech companies. While many companies are making strides toward renewable energy adoption, there is still much more that can be done to mitigate the environmental impact of AI technologies.

Renewable Energy Sources

One of the most crucial steps in reducing the environmental footprint of AI is the shift toward renewable energy sources. Cloud computing companies like Google, Microsoft, and Amazon have already made significant progress by transitioning their data centers to 100% renewable energy in some regions. These initiatives show that it is possible to run large-scale data centers on solar, wind, and hydropower. Crawford argues that these companies need to expand these efforts globally, integrating sustainable energy solutions into all of their data centers and working with local governments to facilitate access to clean energy sources.

Water-Efficient Cooling Technologies

Another area where sustainable practices can make a difference is in the cooling systems used in data centers. Many tech companies are exploring more water-efficient cooling technologies, such as liquid cooling or evaporative cooling, which use far less water than traditional air-based systems. Additionally, recycling water used for cooling or using water from non-potable sources—such as reclaimed or industrial water—can significantly reduce the strain on local water resources.
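One way operators compare cooling designs is the industry’s Water Usage Effectiveness (WUE) metric: liters of water consumed on site per kilowatt-hour of energy delivered to IT equipment, where lower values mean less water per unit of compute. The minimal Python sketch below shows the calculation with hypothetical inputs; the specific figures are assumptions for illustration, not reported values from any provider.

```python
# Minimal sketch of the Water Usage Effectiveness (WUE) metric used to
# compare data center cooling designs. Inputs are illustrative assumptions.

def water_usage_effectiveness(annual_water_liters: float, it_energy_kwh: float) -> float:
    """WUE = liters of water consumed on site per kWh of IT equipment energy."""
    return annual_water_liters / it_energy_kwh

# Hypothetical facility: 200 million liters of cooling water per year,
# 110 million kWh of IT equipment energy.
wue = water_usage_effectiveness(200_000_000, 110_000_000)
print(f"WUE: {wue:.2f} L/kWh")  # lower is better
```

Tracking and disclosing a figure like this is one concrete form the transparency Crawford calls for could take.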

Tech companies must also consider the location of their data centers. By selecting sites in regions with abundant water resources, or where natural cooling is available, companies can reduce their dependence on water-intensive cooling methods. For instance, several cloud providers, including Amazon Web Services (AWS), have sited data centers in colder climates such as the Nordic countries to take advantage of natural cooling and reduce their reliance on water.

The Need for Greater Transparency and Accountability

Ultimately, Crawford argues that greater transparency is needed in how companies report and manage their environmental impacts, particularly in the cloud computing sector. Tech companies must not only reduce their energy and water consumption but also ensure that their environmental practices are transparent and accountable. This includes disclosing the sources of energy used to power data centers, the volume of water used for cooling, and the steps taken to reduce both their carbon footprint and water footprint.

To move forward responsibly, AI companies must adopt sustainable business practices that prioritize environmental stewardship. This includes investing in clean energy, adopting efficient cooling technologies, and committing to water conservation. It also means ensuring that AI systems are not just powerful and efficient but are developed in a way that minimizes environmental degradation and promotes a sustainable future for all.

Rethinking AI’s Environmental Footprint

Crawford’s Atlas of AI pushes us to reconsider the environmental impact of AI and cloud infrastructure. By shining a light on the hidden energy and water costs of running AI systems, she calls for greater responsibility within the tech industry. As AI continues to expand, the environmental costs of cloud computing must be acknowledged and addressed.

The future of AI should be built on a foundation of sustainability and accountability. By making responsible energy sourcing and water-efficient cooling systems a priority, the AI industry can continue to innovate while minimizing its environmental impact. Only by recognizing the full cost of cloud infrastructure can AI truly fulfill its potential as a force for good, balancing technological advancement with environmental responsibility.

Ghost Labor & Unpaid Data Annotation Processes: The Invisible Backbone of AI

Another often-overlooked cost of AI is the human labor that powers it. While much of the conversation around artificial intelligence tends to focus on the algorithms, hardware, and models that drive AI systems, the labor behind these technologies is rarely discussed. A critical aspect of AI development is data annotation—the process by which humans label vast amounts of data to train machine learning models. This process can involve a wide variety of tasks, such as identifying objects in images, transcribing audio files, or even tagging emotions in text. These seemingly mundane tasks are crucial for enabling AI systems to understand and interpret the world in human terms, allowing them to make decisions, recognize patterns, or generate content.

However, the human labor required to annotate data is often hidden, unpaid, and exploited. The people performing these essential tasks are typically located in low-wage countries, working under precarious conditions with little recognition or compensation. Crawford refers to this phenomenon as ghost labor—work that is invisible to the broader public but indispensable to the functioning of AI systems. These workers, though responsible for ensuring the success of AI, are rarely acknowledged in the larger discourse on AI ethics or development.

The Scope of Data Annotation in AI Development

Data annotation is an essential step in training AI models, and as AI technologies become more advanced and sophisticated, the demand for annotated data has only increased. AI systems require vast amounts of labeled data to learn. For instance, image recognition systems need annotated images where objects are tagged and identified to train the model to recognize similar objects in new images. Similarly, speech recognition systems rely on transcribed audio to improve their ability to convert spoken language into text. These processes, though essential, are not automated and require human intervention to ensure accuracy and consistency in the labeled data.

The scale at which data needs to be annotated is enormous. For example, the ImageNet dataset, one of the most important datasets used for image recognition tasks, contains millions of labeled images. Similarly, modern natural language processing (NLP) models require enormous datasets filled with human-annotated text. The scale of this work is not only vast but also growing rapidly as AI applications become more complex and integrated into various industries like healthcare, retail, security, and entertainment. These systems need millions—or even billions—of examples to train on, which drives up the demand for data annotators.
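To make the labor itself tangible, the sketch below shows what a single human-produced annotation record for an image-recognition dataset might look like. The schema, field names, and identifiers are illustrative assumptions—not the actual format of ImageNet or any specific annotation platform—but every labeled object in a record like this represents a judgment made by a human worker.

```python
# Illustrative sketch of a single image-annotation record. The schema and
# values are assumptions for illustration, not any platform's real format.

from dataclasses import dataclass, asdict
import json

@dataclass
class BoundingBox:
    label: str      # object class assigned by a human annotator
    x: int          # top-left corner, in pixels
    y: int
    width: int
    height: int

@dataclass
class AnnotatedImage:
    image_id: str
    annotator_id: str   # the worker behind the labels, usually anonymized
    boxes: list         # list of BoundingBox entries for this image

record = AnnotatedImage(
    image_id="img_000123",
    annotator_id="worker_4871",
    boxes=[BoundingBox("bicycle", 34, 60, 220, 140),
           BoundingBox("person", 120, 20, 80, 190)],
)

print(json.dumps(asdict(record), indent=2))
```

Multiply a record like this by the millions of images in a modern training set, and the scale of the human effort behind “automated” systems becomes clear.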

The Human Cost: Exploitative Conditions and Mental Strain

Despite being the backbone of AI’s functionality, the work involved in data annotation is often undervalued and underpaid. The people doing this work are typically employed in low-wage countries where labor protections are weak or poorly enforced. The workers—many of whom are in developing nations—are hired by third-party companies that offer little to no job security or employee benefits. These jobs are often outsourced to temporary workers or freelancers who receive low pay for long hours of work.

Data annotation is a highly monotonous and time-consuming task. Annotators are required to review and label large quantities of data, often without much variation in the work. For example, image annotators may spend hours labeling pictures with little intellectual engagement or creativity. This repetitive nature can lead to mental fatigue, boredom, and even burnout. But the toll doesn’t end there. In many cases, workers are forced to review graphic or disturbing content—such as violent images, explicit videos, or sensitive medical data—especially in fields like content moderation or medical AI. The exposure to such content can have a profound psychological impact, leading to anxiety, depression, and emotional exhaustion.

In addition to mental strain, there is also a physical toll on workers. Long hours in front of screens can lead to eye strain, back pain, and other health issues associated with repetitive tasks. Despite this, ghost laborers receive minimal compensation, and their contributions often go unrecognized by the companies that benefit most from their work. Many of these workers never see the impact of their labor, as their contributions are anonymized and not celebrated or credited in the final product or system.

The Ethics of Ghost Labor: A Call for Greater Transparency

The ethics of ghost labor are a core issue that Kate Crawford addresses in Atlas of AI. While AI companies often claim to champion innovation, diversity, and responsibility, the exploitation of data annotators represents a glaring contradiction in their operations. Crawford highlights how AI companies are complicit in perpetuating these exploitative working conditions, often turning a blind eye to the human toll that sustains their models.

The reality of unpaid labor and poor working conditions behind AI development is a serious ethical concern. Crawford argues that these issues are systemic and should not be ignored or treated as collateral damage in the rush toward AI advancement. The marginalization of data annotators, who are usually from vulnerable and low-income communities, reveals a pattern of inequality and exploitation embedded in the AI industry.

Crawford calls for greater transparency in the AI supply chain and urges companies to fairly compensate workers who contribute to the creation of their models. She advocates for a more equitable distribution of the benefits that come from AI advancements. Companies must take responsibility for the workers who are at the bottom of the AI value chain. This includes offering fair wages, better working conditions, and mental health support for data annotators. It also means recognizing and valuing the contribution of human labor in building AI systems, not simply treating it as an invisible and undervalued task.

Toward Fair Compensation and Accountability

The exploitation of ghost labor is a complex issue, but it is not an intractable one. Crawford suggests that the AI industry needs to shift towards a fairer model—one that ensures equitable compensation for the people who make AI systems possible. This would require companies to invest in ethical labor practices, including worker representation, improved working conditions, and psychological support for those who are exposed to harmful content.

Moreover, companies should be transparent about the labor behind AI development, publicly acknowledging the human toll associated with their technologies. Implementing ethical sourcing practices, where companies not only ensure fair pay but also actively work to improve the welfare of their labor force, will be essential in creating a more just and sustainable AI ecosystem. Crawford’s work serves as a wake-up call to the tech industry to rethink its reliance on ghost labor and to take real steps toward ensuring that AI’s benefits are distributed equitably across both human workers and the environment.

End-of-Life E-Waste Impact Globally

As AI technologies continue to evolve, so does the amount of electronic waste (e-waste) generated. The rapid pace of innovation means that hardware becomes obsolete faster than ever, leading to an ever-growing mountain of discarded electronic devices. These devices, ranging from smartphones and computers to data center equipment, end up in landfills where they contribute to environmental pollution.

The impact of e-waste is global. Much of the world’s discarded electronics is shipped to landfills in developing nations, where toxic chemicals leach into the surrounding environment. The mining of minerals and production of electronics required to build AI hardware generate massive amounts of waste that end up in our ecosystems. Hazardous substances like lead, mercury, and cadmium are commonly found in e-waste, posing serious health risks to those involved in the recycling process and to surrounding communities.

Crawford’s Atlas of AI brings to light the cycle of consumption that surrounds AI technologies—from the initial mining of raw materials to the disposal of obsolete hardware. As AI adoption continues to grow, so will the environmental impact of the technologies behind it. Crawford calls for a rethinking of the entire lifecycle of AI hardware, encouraging manufacturers to design for longevity, increase recyclability, and limit environmental damage through responsible practices.

Combining Openness with Accountability for Long-Term Sustainability

Atlas of AI is a must-read for leaders aiming to align their AI strategy with environmental and labor responsibility. Kate Crawford’s exploration of AI’s hidden costs provides a comprehensive look at the consequences of a technology that is often celebrated for its potential to revolutionize industries. However, this potential comes at a significant environmental and social cost—one that cannot be ignored.

Crawford’s call to action is clear: AI development must be transparent and ethically responsible. By shedding light on the often unseen impacts of mineral extraction, energy consumption, unpaid labor, and e-waste, Crawford challenges the AI community to take responsibility for the far-reaching effects of its technologies. Companies must work to balance open innovation with accountability to ensure that AI’s future is sustainable and ethically aligned. Transparency in how AI models are developed, where the resources come from, and how they impact both people and the planet is essential for creating an AI ecosystem that benefits everyone.

For leaders in technology, business, and policy, Atlas of AI offers an essential framework for understanding the complex interplay between innovation, ethics, and sustainability in the AI industry. As we continue to push forward with the development of AI, it is crucial that we do so with a sense of responsibility, ensuring that openness and accountability work hand in hand for the long-term sustainability of both the industry and the planet.


Works Cited

Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.

The Guardian. (2021). The Hidden Costs of Artificial Intelligence: What You Don’t See Behind the Models. Retrieved from https://www.theguardian.com

Sequoia Capital. (2022). Hugging Face: A New Vision for Open-Source AI. Retrieved from https://www.sequoiacap.com

Annenberg School for Communication. (2020). AI and Labor: Exploring the Ethics of Data Annotation. Retrieved from https://annenberg.usc.edu

Klover.ai. From Echo to Exhibit: Anatomy of an AI System and Art as Advocacy. Retrieved from https://www.klover.ai/from-echo-to-exhibit-anatomy-of-an-ai-system-and-art-as-advocacy/

Klover.ai. Why AI Ethics Must Confront Environmental and Labor Justice. Retrieved from https://www.klover.ai/why-ai-ethics-must-confront-environmental-and-labor-justice/

Klover.ai. Kate Crawford. Retrieved from https://www.klover.ai/kate-crawford/
