
RESEARCH
My main research projects are all unified, in one way or another, by my interest in responsibility. In my dissertation, I explored the possibility that we may be morally responsible for how we use concepts in our thinking, and connected this to epistemic injustice. More recently, I have written about vicarious responsibility, technologists' responsibility for harms caused by their innovations, and how the fact that large tech companies are too powerful to be meaningfully held responsible for their actions means that we cannot trust them. I am also interested in the philosophy of education, particularly through the work of John Dewey, and the scholarship of learning and teaching, especially teaching ethics and other philosophical topics to computer science students.
Current projects
EPISTEMIC INJUSTICE
Epistemic injustices are instances where people are harmed in their capacity as knowers. For example, when a woman's testimony is dismissed because she is a woman, and not because of the content of her testimony, she experiences a form of epistemic injustice. I am particularly interested in hermeneutical injustice, which arises when a marginalized subject is unable to make some important aspect of their social experience intelligible. This might happen when they try to conceive of their own experience, or when they try to explain it to another person. You can read about some of my past work on hermeneutical injustice below.
Currently, I'm working on two papers on epistemic injustice. The first discusses ways in which various kinds of AI-enabled systems instantiate pre-emptive epistemic injustices. The second draws on my dissertation: I argue that hermeneutical injustice provides a potent counterexample to the claim that morality has no grip on private thoughts, and discuss the implications.
RESPONSIBILITY AND BLAME
I'm interested in both moral and epistemic responsibility—specifically, in how we hold one another responsible for moral failures or intellectual errors. I've previously published work on how we are sometimes responsible for the concepts that we (non-voluntarily) use in our thinking to categorize people and other aspects of the social world. I've also written about how to make sense of vicarious responsibility, that is, instances where we seem to be in some sense responsible for the actions of others. I've applied this latter work to the case of computing professionals, arguing that they should take responsibility for the harms caused by the technologies they create.
In future work, I hope to extend both of these projects into longer works on the ethics of thinking and professional moral responsibility in computing. More immediately, I'm working on a paper about epistemic blame—in particular, I suspect that epistemic blame per se does not exist. Rather, what we call epistemic blame is either a form of moral blame or an inappropriate reaction to inconsequential intellectual errors.
PHILOSOPHY OF COMPUTING
When I first started teaching computer ethics, it reinvigorated my research programme. I've written about coupled ethical and scientific problems in the development of natural language processing models, why the power of Big Tech firms means we cannot trust them, and why computing professionals have a duty to take responsibility for the harms caused by their technologies. Currently, inspired by my connections to the tabletop roleplaying game community and my partner's work in the animation industry, I'm researching AI-generated art. I'm in the early stages of developing a paper arguing that AI art is a form of theft, because the generative models that produce it rely on unlicensed artwork scraped from the web.
SCHOLARSHIP OF LEARNING AND TEACHING
As my teaching informs my research, so I do research on my teaching. I have a longstanding interest in the philosophy of education; I published a version of my Master's thesis on how social conceptions of the university have shifted multiple times between a focus on pure research and a focus on practical skills, and on how John Dewey's vision of education beyond the theory/practice divide can offer a way forward. I currently have a paper under review which applies a framework from cross-disciplinary studies to help understand different approaches to teaching computer ethics.
Publications
MIND THE GAP: AUTONOMOUS SYSTEMS, THE RESPONSIBILITY GAP, AND MORAL ENTANGLEMENT
June 2022. Online at the ACM Digital Library.
ACM FAccT ’22: Proceedings of the ACM Conference on Fairness, Accountability, and Transparency, 2022.
Abstract: When a computer system causes harm, who is responsible? This question has renewed significance given the proliferation of autonomous systems enabled by modern artificial intelligence techniques. At the root of this problem is a philosophical difficulty known in the literature as the responsibility gap. That is to say, because of the causal distance between the designers of autonomous systems and the eventual outcomes of those systems, the dilution of agency within the large and complex teams that design autonomous systems, and the impossibility of fully predicting how autonomous systems will behave once deployed, it is unclear at a conceptual level who is morally responsible for harms caused by autonomous systems. I review past work on this topic, criticizing prior works for suggesting workarounds rather than philosophical answers to the conceptual problem presented by the responsibility gap. The view I develop, drawing on recent work on vicarious moral responsibility, explains why computing professionals are ethically required to take responsibility for the systems they design, despite not being blameworthy for the harms these systems may cause.
OK, GOOGLE, CAN I TRUST YOU? AN ANTI-TRUST ARGUMENT FOR ANTITRUST
Forthcoming.
The Moral Psychology of Trust, eds. Mark Alfano, David Collins, and Iris Vidmar Jovanović, Lexington Books.
Abstract: In this paper, I argue that it is impossible to trust the Big Tech companies, in an ethically important sense of trust. The argument is not that these companies are untrustworthy. Rather, I argue that the power to hold the trustee accountable is a necessary component of this sense of trust, and, because these companies are so powerful, they are immune to our attempts, as individuals or nation-states, to hold them to account. It is, therefore, literally impossible to trust Big Tech. After introducing the accounts of trust and power that I deploy, I argue that Big Tech companies have four kinds of power that render them unaccountable: fiscal power, political power, data power, and cognitive power. I conclude by reflecting on recent calls to break up the Big Tech firms, suggesting a new antitrust test in the light of my arguments.
EPISTEMIC JUSTICE FOR EXTREMISTS?
2022. Online at Taylor & Francis.
The Philosophy of Fanaticism: Epistemic, Affective, and Political Dimensions, eds. Leo Townsend, Ruth Rebecca Tietjen, Hans Bernhard Schmid, and Michael Staudigl, Routledge, pp. 88–108. DOI: 10.4324/9781003119371-7.
Co-author: Charlie Crerar.
Abstract: When we encounter extremist rhetoric, we often find it dumbfounding, incredible, or straightforwardly unintelligible. For this reason, it can be tempting to dismiss or ignore it, at least where it is safe to do so. The problem discussed in this paper is that such dismissals may be, at least in certain circumstances, epistemically unjust. Specifically, it appears that recent work on the phenomenon of hermeneutical injustice compels us to accept two unpalatable conclusions: first, that this failure of intelligibility when we encounter extremist rhetoric may be a manifestation of a hermeneutical injustice; and second, that remedying this injustice requires that we become more engaged with and receptive to extremist worldviews. Whilst some theorists might interpret this as a reductio of this framework of epistemic in/justice, we push back against this conclusion. First, we argue that with a suitably amended conception of hermeneutical justice – one that is sensitive to the contextual nature of our hermeneutical responsibilities, and to the difference between understanding a worldview and accepting it – we can bite the bullet and accept that certain extremists are subject to hermeneutical injustice, but without committing ourselves to any unpalatable conclusions about how we ought to remedy these injustices. Second, we argue that bringing the framework of hermeneutical in/justice to bear upon the experience of certain extremists actually provides a new and useful perspective on one of the causes of extremism, and how it might be undermined.
ANTICIPATION, SMOTHERING, AND EDUCATION: A REPLY TO LEE AND BAYRUNS GARCÍA ON ANTICIPATORY EPISTEMIC INJUSTICE
September 2021. Online at SERRC.
Social Epistemology Review and Reply Collective 10 (9): 36–43.
Abstract: When you expect something bad to happen, you take action to avoid it. That is the principle of action that underlies J. Y. Lee’s recent paper (2021), which presents a new form of epistemic injustice that arises from anticipating negative consequences for testifying. In this brief reply article occasioned by Lee’s essay, I make two main contributions to the discussion of this idea. The first (§§2–3) is an intervention in the discussion between Lee and Eric Bayruns García regarding the relationship between anticipatory epistemic injustice and Kristie Dotson’s concept of testimonial smothering. The second (§§4–5) expands the concept of anticipatory epistemic injustice into the educational context, illuminating yet another form of anticipatory epistemic injustice, which I call anticipatory zetetic injustice. §6 concludes.
BIGGER ISN'T BETTER: THE ETHICAL AND SCIENTIFIC VICES OF EXTRA-LARGE DATASETS IN LANGUAGE MODELS
June 2021. Online at the ACM Digital Library.
WebSci '21: Proceedings of the 13th ACM Web Science Conference (Companion Volume): 69–75.
Co-author: Darren Abramson.
Abstract: The use of language models in Web applications and other areas of computing and business has grown significantly over the last five years. One reason for this growth is the improvement in performance of language models on a number of benchmarks — but a side effect of these advances has been the adoption of a “bigger is always better” paradigm when it comes to the size of training, testing, and challenge datasets. Drawing on previous criticisms of this paradigm as applied to large training datasets crawled from pre-existing text on the Web, we extend the critique to challenge datasets custom-created by crowdworkers. We present several sets of criticisms, where ethical and scientific issues in language model research reinforce each other: labour injustices in crowdwork, dataset quality and inscrutability, inequities in the research community, and centralized corporate control of the technology. We also present a new type of tool for researchers to use in examining large datasets when evaluating them for quality.
MORAL ENTANGLEMENT: TAKING RESPONSIBILITY AND VICARIOUS RESPONSIBILITY
April 2021. Online at Oxford Journals.
The Monist 104(2): 210–223. Issue on 'Vicarious Responsibility and Circumstantial Luck'.
Abstract: Vicarious responsibility is sometimes analysed by considering the different kinds of agents involved—who is vicariously responsible for the actions of whom? In this paper, I discuss vicarious responsibility from a different angle: in what sense is the vicarious agent responsible? I do this by considering the ways in which one may take responsibility for events caused by another agent or process. I discuss three senses of taking responsibility—accepting fault, assuming obligations, and fulfilling obligations—and the forms of vicarious responsibility that correspond to these. I end by explaining how to judge which sense applies in a given case, based on the degree of (what I call) moral entanglement between the agent and what they should take responsibility for.
CONCEPTUAL RESPONSIBILITY
February 2021. Online at Taylor & Francis.
Inquiry: An Interdisciplinary Journal of Philosophy 64(1–2): 20–45. Issue on 'Themes in Conceptual Engineering'.
Abstract: Conceptual engineering is concerned with the improvement of our concepts. The motivating thought behind many such projects is that some of our concepts are defective. But, if to use a defective concept is to do something wrong, and if doing something wrong requires being in control of what one is doing, there might be no defective concepts, since we typically are not in control of our concept use. To address this problem, this paper turns from appraising the concepts we use to appraising the people who use them. First, I outline several ways in which the use of a concept can violate moral standards. Second, I discuss three accounts of moral responsibility, which I call voluntarism, rationalism, and psychologism, arguing that each allows us to find at least some cases where we are responsible for using defective concepts. Third, I answer an objection that because most of our concepts are acquired through processes for which we are not responsible, our use of defective concepts is a matter of bad luck, and not something for which we are responsible after all. Finally, I conclude by discussing some of the ways we may hold people accountable for using defective concepts.
THE CONCEPT OF A UNIVERSITY: THEORY, PRACTICE, AND SOCIETY
May 2019. Online at Brill.
Danish Yearbook of Philosophy 52: 61–81. Special Issue on 'Revisiting the Idea of the University'.
Abstract: Current disputes over the nature and purpose of the university are rooted in a philosophical divide between theory and practice. Academics often defend the concept of a university devoted to purely theoretical activities. Politicians and wider society tend to argue that the university should take on more practical concerns. I critique two typical defenses of the theoretical concept—one historical and one based on the value of pure research—and show that neither the theoretical nor the practical concept of a university accommodates all the important goals expected of university research and teaching. Using the classical pragmatist argument against a sharp division between theory and practice, I show how we can move beyond the debate between the theoretical and practical concepts of a university, while maintaining a place for pure and applied research, liberal and vocational education, and social impact through both economic applications and criticism aimed at promoting social justice.
HARMS AND WRONGS IN EPISTEMIC PRACTICE
November 2018. Online at Cambridge Core.
Royal Institute of Philosophy Supplement 84, Cambridge University Press.
Edited book (co-editors: Simon Barker and Charlie Crerar), reviewed.
Contributors: Alison Bailey, Olivia Bailey, Heather Battaly, Simon Barker, Havi Carel, Quassim Cassam, Charlie Crerar, Miranda Fricker, Trystan S. Goetze, Heidi Grasswick, Keith Harris, Casey Rebecca Johnson, Ian James Kidd, Alessandra Tanesini.
From the Back Cover: How we engage in epistemic practice, including our methods of knowledge acquisition and transmission, the personal traits that help or hinder these activities, and the social institutions that facilitate or impede them, is of central importance to our lives as individuals and as participants in social and political activities. Traditionally, Anglophone epistemology has tended to neglect the various ways in which these practices go wrong, and the epistemic, moral, and political harms and wrongs that follow. In the past decade, however, there has been a turn towards the non-ideal in epistemology. This volume gathers new works by emerging and world-leading scholars on a significant cross section of themes in non-ideal epistemology. Articles focus on topics including intellectual vices, epistemic injustices, interpersonal epistemic practices, and applied epistemology. In addition to exploring the various ways in which epistemic practices go wrong at the level of both individual agents and social structures, the papers gathered herein discuss how these problems are related, and how they may be addressed.
HERMENEUTICAL DISSENT AND THE SPECIES OF HERMENEUTICAL INJUSTICE
Winter 2018. Online at Wiley Online Library.
Hypatia 33(1): 73–90.
Abstract: According to Miranda Fricker, a hermeneutical injustice occurs when there is a deficit in our shared tools of social interpretation (the collective hermeneutical resource), such that marginalized social groups are at a disadvantage in making sense of their distinctive and important experiences. Critics have claimed that Fricker's account ignores or precludes a phenomenon I call hermeneutical dissent, where marginalized groups have produced their own interpretive tools for making sense of those experiences. I clarify the nature of hermeneutical injustice to make room for hermeneutical dissent, clearing up the structure of the collective hermeneutical resource and the fundamental harm of hermeneutical injustice. I then provide a more nuanced account of the hermeneutical resources in play in instances of hermeneutical injustice, enabling six species of the injustice to be distinguished. Finally, I reflect on the corrective virtue of hermeneutical justice in light of hermeneutical dissent.
BOOK REVIEWS
- Games: Agency as Art, by C. Thi Nguyen (Oxford University Press, 2020). In The Philosophical Quarterly (Advance Online Article). DOI: 10.1093/pq/pqab021
- The Limits of Knowledge: Generating Feminist Pragmatist Cases for Situated Knowing, by Nancy Arden McHugh (SUNY Press, 2015). In Journal of Applied Philosophy 33 (2016): 344–46. DOI: 10.1111/japp.12156