The Generative AI Balancing Act: Considering Ethics in Relation to Progress 

[Image: a lone tree grows out of a computer circuit board]

Generative AI (GAI) has become part of the fabric of higher education. We discuss its use in instruction, research, and daily tasks. What is often left out of this conversation, though, are the impacts of GAI on the environment and on labor. These concerns are easy to overlook because the processes behind GAI are far removed from the user, and because leading tech corporations and data centers offer little transparency about them. This post is not meant to solve these issues, nor to dissuade anyone from using GAI, but rather to provoke critical thinking about the relationship between ethics and GAI. Acknowledging the ethical implications of GAI helps us make critical judgments about when and how best to put it to the task.

Resource consumption and carbon emissions 

Much of the research on GAI's environmental impact focuses on training models. GAI models use energy to power computing equipment, cool that equipment, and generate outputs. One study found that four models consumed between 324 and 1,287 MWh each for training alone; for comparison, the average American household consumes about 11 MWh a year. Fresh water is needed to keep the equipment cool, since saltwater corrodes it, and training a single AI model in US data centers can use 5.4 million liters of water. Data centers that rely on nonrenewable energy sources also emit greenhouse gases: training one large model was shown to emit 284 tons of CO2, compared to the roughly 5 tons the average human emits per year. To put things in perspective, at the beginning of 2024 there were more than 2,000 GAI models available, and that count doesn't include retraining existing models when new data becomes available. As AI develops, models grow larger and more complex, meaning more energy and water will be needed and more CO2 will be emitted. Researchers predict that data centers' share of electricity use will more than double by 2030, from 3% to 8%, after two decades in which average growth was less than 0.5%.
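
To make these comparisons a little more concrete, here is a quick back-of-the-envelope calculation in Python using only the figures cited above (the variable names and the "household-years" and "person-years" framing are mine, added for illustration):

```python
# Back-of-the-envelope comparisons using the figures cited above.
# The numbers come from the studies referenced in this post; the
# calculations themselves are just illustrative arithmetic.

TRAINING_ENERGY_MWH = (324, 1287)   # training energy range across the four models
HOUSEHOLD_MWH_PER_YEAR = 11         # average American household's annual electricity use

TRAINING_CO2_TONS = 284             # CO2 emitted training one large model
HUMAN_CO2_TONS_PER_YEAR = 5         # approximate CO2 emitted by one person in a year

low = TRAINING_ENERGY_MWH[0] / HOUSEHOLD_MWH_PER_YEAR
high = TRAINING_ENERGY_MWH[1] / HOUSEHOLD_MWH_PER_YEAR
print(f"Training one model uses {low:.0f} to {high:.0f} household-years of electricity")
# -> Training one model uses 29 to 117 household-years of electricity

ratio = TRAINING_CO2_TONS / HUMAN_CO2_TONS_PER_YEAR
print(f"Training one large model emits {ratio:.0f} person-years of CO2")
# -> Training one large model emits 57 person-years of CO2
```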

Labor exploitation 

GAI has also had a significant impact on labor practices. Models that rely on supervised learning, like ChatGPT, use human staff to help train them on appropriate responses to queries, including filtering out violent, harmful, or discriminatory data. Aside from the psychological trauma workers experienced from filtering this content, OpenAI, the company behind ChatGPT, was paying Kenyan employees $1.46 to $3.74 an hour for this labor. Global AI supply chains habitually outsource labor to the Global South, with OpenAI moving the majority of its data labor to various African countries. Cobalt, an important mineral needed for AI technology, is mined primarily in the Democratic Republic of the Congo, where problems with access to clean water and safe food persist; even if these conditions cannot be attributed entirely to mining minerals for GAI, "profit-exportation by foreign multi-nationals" exacerbates them. Low wages and poor working conditions at GAI tech companies also plague workers in the United States. In 2018, researchers found that 96% of workers on Amazon Mechanical Turk, a platform offering jobs similar to the data cleaning needed for supervised learning, earned less than the federal minimum wage. And that was before ChatGPT and other GAI tools rose to popularity in 2022.
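
For a rough sense of scale, those reported wages can be set against the US federal minimum wage of $7.25 an hour; that benchmark is my addition for context, not a figure from the reporting above:

```python
# Comparing the reported hourly wages for data-labeling work with the
# US federal minimum wage. The $7.25 benchmark is my addition for
# context; it is not a figure from the reporting cited above.

US_FEDERAL_MINIMUM_WAGE = 7.25      # USD per hour
reported_wages = (1.46, 3.74)       # USD per hour, as reported for the Kenyan workers

for wage in reported_wages:
    share = wage / US_FEDERAL_MINIMUM_WAGE
    print(f"${wage:.2f}/hour is {share:.0%} of the US federal minimum wage")
# -> $1.46/hour is 20% of the US federal minimum wage
# -> $3.74/hour is 52% of the US federal minimum wage
```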

What can be done? 

There is no doubt that GAI has changed our lives. It can do amazing things that free up our time for other pursuits. But it's important to remember that GAI isn't free just because we can't see what it costs. We've explored just two facets of a much larger ethical conversation, one that also includes privacy and security, copyright, bias, and many other issues. Even if this feels insurmountable, we can start by including ethics in our decisions about GAI. Researchers, academics, and activists are working to make sure that tech corporations and data centers aren't the only voices shaping the GAI conversation. Here are a few articles that support critical thinking about GAI ethics:

  • Ridley, M. (2024). Human-centered explainable artificial intelligence: An Annual Review of Information Science and Technology (ARIST) paper. Journal of the Association for Information Science and Technology. https://doi.org/10.1002/asi.24889
  • Tacheva, J., & Ramasubramanian, S. (2023). AI empire: Unraveling the interlocking systems of oppression in generative AI’s global order. Big Data & Society, 10(2). https://doi.org/10.1177/20539517231219241
  • Thomas, C., Roberts, H., Mökander, J., Tsamados, A., Taddeo, M., & Floridi, L. (2024). The case for a broader approach to AI assurance: Addressing “hidden” harms in the development of artificial intelligence. AI & Society. https://doi.org/10.1007/s00146-024-01950-y


This post was written by Julia Anderson, a social sciences research librarian at Fondren Library. Julia leads workshops on generative AI tools for research and spends a lot of time reading about the intersections of ethics and generative AI.