Research | Trystan Goetze

 MORAL RESPONSIBILITY 


 EPISTEMIC INJUSTICE 


 ETHICS OF TECHNOLOGY 

Research

My areas of research specialization are:

  • Meta-ethics (with a focus on responsibility)

  • Social epistemology (with a focus on epistemic injustice)

  • Ethics of technology (with a focus on computing and artificial intelligence)

  • Philosophy and practice of education (with a focus on interdisciplinary ethics education)


My research has multiple strands, but is unified by my interest in the various ways we use the concept of responsibility. I've written articles on the following topics:

  • Whether we are responsible for how we think, particularly how we categorize people and social experiences.

  • How we should take responsibility for failures to properly understand others.

  • What it means to be a responsible technologist, and how technology companies should be held to account when their innovations cause harm.

  • Ethics education in computer science and other technical disciplines of study.


I am currently working on research projects motivated by the following questions:

  • Is generative AI built on the theft of creative labour?

  • Are we morally responsible for how we think?

  • How can educators best leverage material from normative disciplines in STEM courses?


All of my research publications are listed below, with typescript PDFs to download and links to the versions of record.

Preprints

Last updated 10 Jan 2024


AI Art is Theft: Labour, Extraction, and Exploitation, Or, On the Dangers of Stochastic Pollocks 🎨

Since the launch of applications such as DALL-E, Midjourney, and Stable Diffusion, generative artificial intelligence has been controversial as a tool for creating artwork. While some have presented longtermist worries about these technologies as harbingers of fully automated futures to come, more pressing is the impact of generative AI on creative labour in the present. Already, business leaders have begun replacing human artistic labour with AI-generated images. In response, the artistic community has launched a protest movement, which argues that AI image generation is a kind of theft. This paper analyzes, substantiates, and critiques these arguments, concluding that AI image generators involve an unethical kind of labour theft. If correct, many other AI applications also rely upon theft.

Publications

2023

The Moral Psychology of Trust, eds. David Collins, Iris V. Jovanović, and Mark Alfano. Lexington Books.

Okay, Google, Can I Trust You? An Anti-Trust Argument for Antitrust

In this chapter, I argue that it is impossible to trust the Big Tech companies, in an ethically important sense of trust. The argument is not that these companies are untrustworthy. Rather, I argue that the power to hold the trustee accountable is a necessary component of this sense of trust, and, because these companies are so powerful, they are immune to our attempts, as individuals or nation-states, to hold them to account. It is, therefore, literally impossible to trust Big Tech. After introducing the accounts of trust and power that I deploy, I argue that Big Tech companies have four kinds of power that render them unaccountable: fiscal power, political power, data power, and cognitive power. I conclude by reflecting on recent calls to break up the Big Tech firms, suggesting a new antitrust test in the light of my arguments.

2023

Proceedings of the 54th ACM Technical Symposium on Computer Science Education.

Integrating Ethics into Computer Science Education: Multi-, Inter-, and Transdisciplinary Approaches

While calls to integrate ethics into computer science education go back decades, recent high-profile ethical failures related to computing technology by large technology companies, governments, and academic institutions have accelerated the adoption of computer ethics education at all levels of instruction. Discussions of how to integrate ethics into existing computer science programmes often focus on the structure of the intervention—embedded modules or dedicated courses, humanists or computer scientists as ethics instructors—or on the specific content to be included—lists of case studies and essential topics to cover. While proponents of computer ethics education often emphasize the importance of closely connecting ethical and technical content in these initiatives, most do not reflect in depth on the variety of ways in which the disciplines can be combined. In this paper, I deploy a framework from cross-disciplinary studies that categorizes academic projects that work across disciplines as multidisciplinary, interdisciplinary, or transdisciplinary, depending on the degree of integration. When applied to computer ethics education, this framework is orthogonal to the structure and content of the initiative, as I illustrate using examples of dedicated ethics courses and embedded modules. It therefore highlights additional features of cross-disciplinary teaching that need to be considered when planning a computer ethics programme. I argue that computer ethics education should aim to be at least interdisciplinary—multidisciplinary initiatives are less aligned with the pedagogical aims of computer ethics—and that computer ethics educators should experiment with fully transdisciplinary education that could transform computer science as a whole for the better.

2022

The Philosophy of Fanaticism: Epistemic, Affective, and Political Dimensions, eds. Leo Townsend, Ruth Rebecca Tietjen, Hans Bernhard Schmid & Michael Staudigl. Routledge.

Hermeneutical Justice for Extremists?
Co-author: Charlie Crerar.

When we encounter extremist rhetoric, we often find it dumbfounding, incredible, or straightforwardly unintelligible. For this reason, it can be tempting to dismiss or ignore it, at least where it is safe to do so. The problem discussed in this paper is that such dismissals may be, at least in certain circumstances, epistemically unjust. Specifically, it appears that recent work on the phenomenon of hermeneutical injustice compels us to accept two unpalatable conclusions: first, that this failure of intelligibility when we encounter extremist rhetoric may be a manifestation of a hermeneutical injustice; and second, that remedying this injustice requires that we become more engaged with and receptive to extremist worldviews. Whilst some theorists might interpret this as a reductio of this framework of epistemic in/justice, we push back against this conclusion. First, we argue that with a suitably amended conception of hermeneutical justice—one that is sensitive to the contextual nature of our hermeneutical responsibilities, and to the difference between understanding a worldview and accepting it—we can bite the bullet and accept that certain extremists are subject to hermeneutical injustice, but without committing ourselves to any unpalatable conclusions about how we ought to remedy these injustices. Second, we argue that bringing the framework of hermeneutical in/justice to bear upon the experience of certain extremists actually provides a new and useful perspective on one of the causes of extremism, and how it might be undermined.

2022

Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22).

Mind the Gap: Autonomous Systems, the Responsibility Gap, and Moral Entanglement

When a computer system causes harm, who is responsible? This question has renewed significance given the proliferation of autonomous systems enabled by modern artificial intelligence techniques. At the root of this problem is a philosophical difficulty known in the literature as the responsibility gap. That is to say, because of the causal distance between the designers of autonomous systems and the eventual outcomes of those systems, the dilution of agency within the large and complex teams that design autonomous systems, and the impossibility of fully predicting how autonomous systems will behave once deployed, it is unclear at a conceptual level who is morally responsible for harms caused by autonomous systems. I review past work on this topic, criticizing prior works for suggesting workarounds rather than philosophical answers to the conceptual problem presented by the responsibility gap. The view I develop, drawing on my earlier work on vicarious moral responsibility, explains why computing professionals are ethically required to take responsibility for the systems they design, despite not being blameworthy for the harms these systems may cause.

2021

Social Epistemology Review and Reply Collective 10 (9): 36-43.

Anticipation, Smothering, and Education: A Reply to Lee and Bayruns García on Anticipatory Epistemic Injustice

When you expect something bad to happen, you take action to avoid it. That is the principle of action that underlies J. Y. Lee’s recent paper (2021), which presents a new form of epistemic injustice that arises from anticipating negative consequences for testifying. In this brief reply article occasioned by Lee’s essay, I make two main contributions to the discussion of this idea. The first (§§2–3) is an intervention in the discussion between Lee and Eric Bayruns García regarding the relationship between anticipatory epistemic injustice and Kristie Dotson’s concept of testimonial smothering. The second (§§4–5) is to expand the concept of anticipatory epistemic injustice into the educational context, and to illuminate yet another form of anticipatory epistemic injustice, which I call anticipatory zetetic injustice (§6).

2021

WebSci '21: Proceedings of the 13th Annual ACM Web Science Conference (Companion Volume).

Bigger Isn’t Better: The Ethical and Scientific Vices of Extra-Large Datasets in Language Models
Co-author: Darren Abramson.

The use of language models in Web applications and other areas of computing and business has grown significantly over the last five years. One reason for this growth is the improvement in performance of language models on a number of benchmarks, but a side effect of these advances has been the adoption of a “bigger is always better” paradigm when it comes to the size of training, testing, and challenge datasets. Drawing on previous criticisms of this paradigm as applied to large training datasets crawled from pre-existing text on the Web, we extend the critique to challenge datasets custom-created by crowdworkers. We present several sets of criticisms, where ethical and scientific issues in language model research reinforce each other: labour injustices in crowdwork, dataset quality and inscrutability, inequities in the research community, and centralized corporate control of the technology. We also present a new type of tool for researchers to use in examining large datasets when evaluating them for quality.

2021

The Monist 104 (2): 210-223.

Moral Entanglement: Taking Responsibility and Vicarious Responsibility

Vicarious responsibility is sometimes analysed by considering the different kinds of agents involved—who is vicariously responsible for the actions of whom? In this paper, I discuss vicarious responsibility from a different angle: in what sense is the vicarious agent responsible? I do this by considering the ways in which one may take responsibility for events caused by another agent or process. I discuss three senses of taking responsibility—accepting fault, assuming obligations, and fulfilling obligations—and the forms of vicarious responsibility that correspond to these. I end by explaining how to judge which sense applies in a given case, based on the degree of (what I call) moral entanglement between the agent and what they should take responsibility for.

2021

Inquiry: An Interdisciplinary Journal of Philosophy 64 (1-2): 20-45. 

Conceptual Responsibility

Conceptual engineering is concerned with the improvement of our concepts. The motivating thought behind many such projects is that some of our concepts are defective. But, if to use a defective concept is to do something wrong, and if to do something wrong one must be in control of what one is doing, there might be no defective concepts, since we typically are not in control of our concept use. To address this problem, this paper turns from appraising the concepts we use to appraising the people who use them. First, I outline several ways in which the use of a concept can violate moral standards. Second, I discuss three accounts of moral responsibility, which I call voluntarism, rationalism, and psychologism, arguing that each allows us to find at least some cases where we are responsible for using defective concepts. Third, I answer an objection that because most of our concepts are acquired through processes for which we are not responsible, our use of defective concepts is a matter of bad luck, and not something for which we are responsible after all. Finally, I conclude by discussing some of the ways we may hold people accountable for using defective concepts.

2019

Danish Yearbook of Philosophy 52 (1): 61-81. 

The Concept of a University: Theory, Practice, and Society

Current disputes over the nature and purpose of the university are rooted in a philosophical divide between theory and practice. Academics often defend the concept of a university devoted to purely theoretical activities. Politicians and wider society tend to argue that the university should take on more practical concerns. I critique two typical defenses of the theoretical concept—one historical and one based on the value of pure research—and show that neither the theoretical nor the practical concept of a university accommodates all the important goals expected of university research and teaching. Using the classical pragmatist argument against a sharp division between theory and practice, I show how we can move beyond the debate between the theoretical and practical concepts of a university, while maintaining a place for pure and applied research, liberal and vocational education, and social impact through both economic applications and criticism aimed at promoting social justice.

2018

Harms & Wrongs in Epistemic Practice: Royal Institute of Philosophy Supplement 84: 1-21. Cambridge University Press

Harms and Wrongs in Epistemic Practice
Co-authors: Simon Barker & Charlie Crerar.

This volume has its roots in two recent developments within mainstream analytic epistemology: a growing recognition over the past two or three decades of the active and social nature of our epistemic lives; and, more recently still, the increasing appreciation of the various ways in which the epistemic practices of individuals and societies can, and often do, go wrong. The theoretical analysis of these breakdowns in epistemic practice, along with the various harms and wrongs that follow as a consequence, constitutes an approach to epistemology that we refer to as non-ideal epistemology. In this introductory chapter we introduce and contextualise the ten essays that comprise this volume, situating them within four broad sub-fields: vice epistemology, epistemic injustice, inter-personal epistemic practices, and applied epistemology. We also provide a brief overview of several other important growth areas in non-ideal epistemology.

2018

Edited volume collecting peer-reviewed papers from the 2017 Departmental Conference of the Royal Institute of Philosophy.

Harms and Wrongs in Epistemic Practice: Royal Institute of Philosophy Supplement 84
Co-editors: Simon Barker & Charlie Crerar.

Contributors: Alison Bailey, Olivia Bailey, Simon Barker, Heather Battaly, Havi Carel, Quassim Cassam, Charlie Crerar, Miranda Fricker, Trystan S. Goetze, Heidi Grasswick, Keith Harris, Casey Rebecca Johnson, Ian James Kidd, and Alessandra Tanesini.

How we engage in epistemic practice, including our methods of knowledge acquisition and transmission, the personal traits that help or hinder these activities, and the social institutions that facilitate or impede them, is of central importance to our lives as individuals and as participants in social and political activities. Traditionally, Anglophone epistemology has tended to neglect the various ways in which these practices go wrong, and the epistemic, moral, and political harms and wrongs that follow. In the past decade, however, there has been a turn towards the non-ideal in epistemology. This volume gathers new works by emerging and world-leading scholars on a significant cross section of themes in non-ideal epistemology. Articles focus on topics including intellectual vices, epistemic injustices, interpersonal epistemic practices, and applied epistemology. In addition to exploring the various ways in which epistemic practices go wrong at the level of both individual agents and social structures, the papers gathered herein discuss how these problems are related, and how they may be addressed.

2018

Hypatia 33 (1): 73-90.

Hermeneutical Dissent and the Species of Hermeneutical Injustice

According to Miranda Fricker, a hermeneutical injustice occurs when there is a deficit in our shared tools of social interpretation, such that marginalized social groups are at a disadvantage in making sense of their distinctive and important experiences. Critics have claimed that Fricker's account ignores or precludes a phenomenon I call hermeneutical dissent, where marginalized groups have produced their own interpretive tools for making sense of those experiences. I clarify the nature of hermeneutical injustice to make room for hermeneutical dissent, clearing up the structure of the collective hermeneutical resource and the fundamental harm of hermeneutical injustice. I then provide a more nuanced account of the hermeneutical resources in play in instances of hermeneutical injustice, enabling six species of the injustice to be distinguished. Finally, I reflect on the corrective virtue of hermeneutical justice in light of hermeneutical dissent.

Book Reviews

2022

The Philosophical Quarterly 72 (1): 240-243.

Book Review: Games: Agency as Art, by C. Thi Nguyen.

As the title elegantly expresses, this book begins with games, then follows the insight that there is an art of agency down many different lines of inquiry. Remarkably wide-ranging, the book is packed with observations on many topics of interest to philosophy of action, aesthetics and philosophy of art, social philosophy, art criticism, game studies, and game design. [...]

2016

Journal of Applied Philosophy 33 (3): 344-346.

Book Review: The Limits of Knowledge: Generating Pragmatist Feminist Cases for Situated Knowing, by Nancy Arden McHugh.

Part epistemological critique, part investigative reporting, and part manifesto for social change, McHugh’s project is to tackle the question of how health science research can best contribute to the flourishing of marginalized communities. Each chapter is structured around a community case study, through which she introduces and synthesizes arguments from John Dewey and feminist philosophy to interrogate epistemological issues in health research, practice, and policy. The general theme is the tension between what scientists and laypeople want from science, and the consequent failure of science to improve the lives of marginalized groups. [...]
