How to Use AI Without Compromising Your Ethics
- Paula Stäbler
- Apr 18, 2024
- 8 min read
Updated: May 2, 2024

We want to spend our time doing what we love rather than what we must do, so using artificial intelligence to make our lives easier and faster is very tempting. Since the humanities have always been under threat – from people who value existing culture but not the artists behind it; from governments wanting their citizens to work in “productive” and economically “useful” jobs; and now from AI – the fear quickly emerges that using artificial intelligence will endanger our existence even further. The articles on this site try to show that using AI within the humanities doesn’t have to end in disaster. Instead, it can be a helpful tool that assists rather than replaces human genius. In this article, I will uncover and untangle what we need to understand, ethically, if we want to use AI. Some of the implications of AI use are hard to swallow; they challenge our morals, and we must ask ourselves whether we can live with the consequences.
Visual, aural, and literary artists fear two things most: replacement and plagiarism. The emergence of AI heightens this worry – now we are threatened not only by other humans but by machines as well. Marcus du Sautoy, a science and mathematics professor at the University of Oxford, argues that “[t]he art of learning from what artists have done in the past and using that knowledge to push into the new is of course the process which most human artists go through,” and AI algorithms are utilising and accelerating that process (McLoughlin, 2024). AI being “inspired” by others’ work (i.e. its training data) is therefore only a continuation of human behaviour. But it’s not that easy, is it?
To train generative AI, its developers feed it information, including large amounts of written text ranging from Wikipedia entries to entire books. Alex Reisner, a writer and programmer, wrote a series for The Atlantic on a data set called Books3. His findings are shocking: “Pirated books are being used as inputs for computer programs that are changing how we read, learn, and communicate. The future promised by AI is written with stolen words” (Reisner, 2023). The data set contains more than 191,000 books – included without the authors’ or publishers’ permission or knowledge. Reisner explains that AI analyses “the relationship among words in intelligent-sounding language,” learning “information about how to construct long, thematically consistent paragraphs” (2023). Although developers argue this use is legal under the “fair use” doctrine, which permits certain derivative works, it remains an ethically questionable practice.
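To make concrete what “learning the relationship among words” means, here is a deliberately tiny sketch in Python. Real large language models learn these relationships with neural networks trained on billions of words; the toy corpus and the bigram counting below are simplifications for illustration only.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the billions of words in sets like Books3.
corpus = "the future promised by ai is written with stolen words".split()

# Count which word tends to follow which – the crudest possible model
# of "the relationship among words".
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

# The model can now "predict" the most frequent continuation of a word.
print(following["the"].most_common(1))  # [('future', 1)]
```

Even this miniature model can only ever recombine the words it was fed – which is precisely why the provenance of the training text matters.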
It is crucial to remember that the authors whose books are part of the data set were unaware of it. Once they found out, many expressed fury. Novelist Lauren Groff wrote, “I would never have consented for Meta to train AI on any of my books” (Nguyen, 2023). Writer Ingrid Rojas Contreras expressed desperation, saying that writing her novel Fruit of the Drunken Tree meant “7 yrs of sweat, blood, midnight joys, pleadings at the beach, ablutions in the bath, panic attacks, poring over dictionaries, thinking how to do it how to tell it, this was surviving things that nearly destroyed me – and they just took it?” (Nguyen, 2023). Copyright and licensing laws exist so that authors and publishers can control who uses their works and how. Since these authors never consented, they consider their works stolen and the creators of Books3 thieves. Developers of generative AI bypass these laws: instead of asking copyright holders for permission and compensating them financially, they use pirated versions – highlighting how little society values the artists and authors who lose out on royalties and income.

Certain generative AI programmes learn from those more than 191,000 books, meaning they absorb and recombine their words – including their opinions, narratives, and biases. As computer scientist Timnit Gebru explains, “Society’s views of race and gender are encoded into the AI systems” (2020, p. 259). In the case of Books3, this includes the ideologies of Erich von Däniken, who wrote pseudo-scientific books proposing that aliens built the pyramids, and of John F. MacArthur, who promotes right-wing politics, including anti-queer and anti-feminist rhetoric, and Young Earth creationism, a belief that opposes evolution (Reisner, 2023). The majority of the content also favours the Western canon. AI systems ingest this unfiltered information from the internet, and the discrimination found there resurfaces in their output.
“[I]n word embeddings […], African American names are more associated with unpleasant concepts like sickness,” Gebru says, “whereas European American names are associated with pleasant concepts like flowers. […] sentiment analysis tools often classify texts pertaining to LGBTQ+ individuals as negative” (2020, p. 265). It would be careless and harmful to use AI instead of human writers if these biases are absorbed and repeated. Gebru notes that most AI research is done in Western countries and by companies like Facebook and Google, meaning their priorities centre on Western research, concepts, and languages. This is not only problematic in principle; it has disastrous real-life consequences. In 2017, various newspapers, including the Guardian and The Independent, reported that Israeli authorities arrested a Palestinian worker for writing “good morning” in Arabic: Facebook’s automated translation had rendered it as “hurt them” in English and “attack them” in Hebrew (2020, p. 264). Although translation models have improved since 2017, the stereotype of Arabic-speaking (and Muslim) people as terrorists continues to be reiterated. This incident shows that we cannot rely solely on machine translation but must involve professional human translators who can review the programmes and their output.
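The name–sentiment associations Gebru describes can be measured directly in an embedding space. The sketch below is illustrative only: the three-dimensional vectors are invented (real audits use vectors learned from web-scale text, such as GloVe, and tests like the Word Embedding Association Test); only the cosine-similarity arithmetic is the standard technique.

```python
import math

# Hypothetical 3-dimensional "embeddings", invented for illustration;
# real audits use vectors learned from web-scale text (e.g. GloVe).
vectors = {
    "name_a":     [0.9, 0.1, 0.2],
    "name_b":     [0.1, 0.9, 0.3],
    "pleasant":   [0.8, 0.2, 0.1],
    "unpleasant": [0.2, 0.8, 0.4],
}

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity: how closely two word vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

# A name that sits closer to "unpleasant" than to "pleasant" has had a
# bias from the training text encoded into its vector.
for name in ("name_a", "name_b"):
    bias = cosine(vectors[name], vectors["pleasant"]) - cosine(vectors[name], vectors["unpleasant"])
    print(name, round(bias, 3))  # positive = leans "pleasant"
```

The point is that the bias is not an occasional glitch but a property of the learned geometry itself, inherited wholesale from the training text.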
When AI is used for research – since it is commonly seen as quicker and more efficient – it often produces incorrect or misleading results, called “hallucinations”. These hallucinations can take the form of non-existent sources, for example. Because generative models base their outputs on “estimates of semantic similarity,” they can mix false information with facts (Emsley, 2023). Robin Emsley, a medical doctor and editor of the journal Schizophrenia, found that the references ChatGPT provided him were untraceable. Previous studies support his experience, showing that “out of 178 references cited, 69 did not have a DOI, 28 of which were found not to exist” and “out of 115 references […], 47% were fabricated, 46% were authentic but inaccurate, and only 7% were authentic and accurate” (Emsley, 2023). If the aim of AI is to assist us and make our lives easier, we have to be able to trust it. But if we trusted it blindly, never questioning or fact-checking the information it provides, we would spread largely false or inaccurate information. For those choosing to use AI in their research, fact-checking is essential. As Emsley did, writers must trace their sources to verify that the cited papers exist and that their content is correct. Fact-checking is crucial, but it lengthens the writing process. A more efficient approach that still ensures accuracy is to use AI for research ideas – prompting it for research questions or an article plan rather than for facts and sources.
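Part of this fact-checking can be automated. As a minimal sketch – assuming the third-party requests library and the public doi.org resolver, with the helper name below my own invention – one can at least flag references whose DOIs do not resolve. A failure signals a likely fabrication but still requires checking by hand.

```python
import requests  # third-party: pip install requests

def doi_resolves(doi: str) -> bool:
    """Ask the public doi.org resolver whether a DOI is registered.

    doi.org answers registered DOIs with a redirect to the publisher
    and unregistered ones with a 404 – a cheap first filter, not a
    substitute for reading the cited paper.
    """
    response = requests.head(f"https://doi.org/{doi}", allow_redirects=False, timeout=10)
    return 300 <= response.status_code < 400

# Example: the DOI of the Emsley (2023) article cited in this piece.
print(doi_resolves("10.1038/s41537-023-00379-4"))  # expected: True
```

Note what this does not catch: Emsley’s second category, references that are “authentic but inaccurate”, can only be exposed by a human actually reading the cited work.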
Bias and misinformation are AI’s two most significant shortcomings, but the research into and continued use of AI have greater societal implications. Although we might not necessarily lose our jobs, we will continue to be underpaid and, to some extent, replaced by machines, since using machines is cost- and time-efficient for employers. Forbes writer Dani Di Placido summarises it well: “Generative AI threatens the livelihood of artists, pitting their labor against the cheap slop produced by dead machines. The technology only benefits those who wish to produce content as quickly and cheaply as possible, by removing artists from the creative process” (2023). When WGA members went on strike last year, they demanded regulations to “prevent raw, AI-generated storylines or dialogue from being regarded as ‘literary material’” (Dalton, 2023). Similarly, the SAG-AFTRA strike was partly a response to missing provisions on artificial intelligence, without which studios could use actors’ “voice, likeness or performance” without permission or compensation.

In January 2024, the union signed an agreement with the AI voice technology company Replica Studios, allowing digital replicas of SAG-AFTRA members’ voices to be created and used in video game development. While the union stated this had been done with voice actors’ approval, many of those affected disputed this on X, saying the union had “betrayed” them and that “no voice actor would willingly approve this” (Murray, 2024). Still, after the WGA/SAG-AFTRA strikes, the prevailing mood should be tentative optimism rather than continued dread.
Writing in the LA Times, Brian Merchant declares that “the humans won” (2023). The union members stayed persistent, refusing the studios’ proposals and fighting for their craft. Merchant explains that artists feared losing “films and series colored by real-life experience that explored the human experience […] art” – a fear that underlines why AI is not good enough to replace human artists. What we have over machines is actual lived experience, genuine emotion, and the desire to create. Humanity is our power, and as the strikes’ results showed, “[t]here is great power in drawing a hard line, in refusing to let a boss use technology to erase your job, in speaking up about how you would not like technology to shape your life,” as Merchant eloquently argues.

Artists are consumers as well and can determine how and to what extent technology is used. We can say “no” to data sets built on stolen work and advocate for the joint and consensual development of AI – which includes holding open discussions about AI that involve everyone, not just investors whose primary interest is capitalist gain. The Ada Lovelace Institute, an independent research organisation pushing for technology to serve society, recommends making AI accessible “to a wider public […], afford[ing] ordinary citizens the opportunity to articulate their views in dialogue with others” (Tasioulas, 2021). Similarly, IBM has developed five pillars to ensure the ethical use of AI:
- explainability (how and with what information an algorithm has been developed)
- fairness (equal treatment by countering human bias and promoting inclusivity)
- robustness (minimising security risks and strengthening confidence in systems)
- transparency (disclosing what data is collected and used, ensuring users can assess benefits and limits)
- privacy (prioritising and safeguarding consumers’ rights)
As individuals, we must defend these values and help develop them further. Students should minimise and optimise their use of AI. Instead of mindless copy-pasting, every AI-generated text needs to be fact-checked, edited, and adjusted by humans, who fill it with human life and personality. Although studios could use voice actors’ existing material to create new dialogue, only a human can provide the intonation needed to make it believable. To ensure ethical interaction with the technology and our audience, transparency about individual AI use is crucial: we need to state when and how AI has assisted us. This could read, for example, “ChatGPT created an outline for this article” or “The pictures used in this article were generated by Microsoft Image Creator”.
Overall, students will need to decide for themselves whether and to what extent they want to use AI. However, every use needs to come with an understanding of its consequences. Only if we comprehend the repercussions of frequent AI use can we argue convincingly that human artists must remain part of the process.
References:
Dalton, A., and Associated Press (2023) ‘Writers Strike: Why A.I. is such a hot-button issue in Hollywood’s labor battle with SAG-AFTRA’, Fortune, 24 July. Available at: https://fortune.com/2023/07/24/sag-aftra-writers-strike-explained-artificial-intelligence/ (Accessed: 14 April 2024).
Di Placido, D. (2023) ‘The Problem of AI-Generated Art, Explained’, Forbes, 30 December. Available at: https://www.forbes.com/sites/danidiplacido/2023/12/30/ai-generated-art-was-a-mistake-and-heres-why/ (Accessed: 14 April 2024).
Emsley, R. (2023) ‘ChatGPT: these are not hallucinations – they’re fabrications and falsifications’, Schizophrenia, 9(52). Available at: https://doi.org/10.1038/s41537-023-00379-4.
Gebru, T. (2020) ‘Race and Gender’, in M. D. Dubber, F. Pasquale, and S. Das (eds.) The Oxford Handbook of Ethics of AI (Online, Oxford Academic), pp. 252-269. Available at: https://doi.org/10.1093/oxfordhb/9780190067397.013.16.
IBM (2023) ‘IBM Artificial Intelligence Pillars’, 30 August. Available at: https://www.ibm.com/policy/ibm-artificial-intelligence-pillars/ (Accessed: 8 April 2024).
McLoughlin, J. (2024) ‘The Work of Art in the Age of Artificial Intelligibility’, AI & Society. Available at: https://doi.org/10.1007/s00146-023-01845-4.
Merchant, B. (2023) ‘Column: The Writers Strike was the first workplace battle between humans and AI. The humans won’, LA Times, 25 September. Available at: https://www.latimes.com/business/technology/story/2023-09-25/column-sag-aftra-strike-writers-victory-humans-over-ai (Accessed: 18 April 2024).
Murray, C. (2024) ‘Video Game Voice Actors Criticize SAG-AFTRA Over Agreement With AI Company’, Forbes, 10 January. Available at: https://www.forbes.com/sites/conormurray/2024/01/10/video-game-voice-actors-criticize-sag-aftra-over-agreement-with-ai-company/ (Accessed: 18 April 2024).
Nguyen, S. (2023) ‘Some writers are furious that AI consumed their books. Others? Less so’, Washington Post, 29 September. Available at: https://www.washingtonpost.com/books/2023/09/29/authors-ai-copyright-books3-reactions/ (Accessed: 13 April 2024).
Reisner, A. (2023) ‘Revealed: The Authors Whose Pirated Books are Powering Generative AI’, The Atlantic, 19 August. Available at: https://www.theatlantic.com/technology/archive/2023/08/books3-ai-meta-llama-pirated-books/675063/ (Accessed: 13 April 2024).
—— (2023) ‘What I Found in a Database Meta Uses to Train Generative AI’, The Atlantic, 25 September. Available at: https://www.theatlantic.com/technology/archive/2023/09/books3-ai-training-meta-copyright-infringement-lawsuit/675411/ (Accessed: 13 April 2024).
Tasioulas, J. (2021) ‘The role of the arts and humanities in thinking about artificial intelligence (AI)’, Ada Lovelace Institute, 12 June. Available at: https://www.adalovelaceinstitute.org/blog/role-arts-humanities-thinking-artificial-intelligence-ai/ (Accessed: 8 April 2024).