Strategies for ethically oriented pursuit of research and technology

Authors

Alejandra Arciniegas

Brendan Fong

Published

2022-08-24

Abstract

At Topos we’re motivated to build technologies that help us understand the complexity of the systems around us, and to cooperate to improve them, for everyone’s sake. But sometimes good intentions are not quite enough. For example, cooperation requires trust, and we want our technologies to help build trust within society. It’s now clear that technologies like social media can damage trust through misinformation and polarization. How might we anticipate and protect against such consequences in our own work?

As Ezra Klein recently wrote in the New York Times, technologies have a say in who we become (Klein 2022). Thus, as technologists, we should ask “How do we want to be shaped? Who do we want to become?” How do we understand the link between the technologies that we build, and the values we wish to express?

These questions are by no means simple: the world is complex and hard to predict, especially when we’re at the frontier of new and foundational research. But we can make progress step-by-step, by considering how ethical considerations are woven into practical details of our daily research culture.

We’re very glad we’re not alone in this pursuit. Over the past few months we’ve been exploring what we can learn from how other technology research communities pursue ethically oriented research. We’ve grouped some of the patterns into three commonly used governance strategies (checklists, core principles, and review boards) and one novel approach designed with the nature of research at Topos in mind, which we tentatively call parallel synthesis. Here we discuss their pros and cons, especially in relation to a small, fundamental research group like ours at Topos.

While we’re still exploring what approach is the best fit for Topos, we hope that making our initial “literature review” transparent and available will help us in this search.

1 Checklists

The checklist strategy provides a short, standard reference list of ethics-relevant questions for researchers to complete at various “milestone” points in the development of their work. Milestones could include, for example, project inception, major publications, grant proposals, pitches to other units or companies, and quarterly or annual reviews. The strategy can be broadened to a collection of related checklists, tailored to projects of different kinds.

Each question on the checklist is designed to prompt reflection about ethical dimensions of the project, helping researchers to not only keep such questions in mind, but also actively “report” on the status of their project with respect to them. The questions may be associated with “scores”, so that the final report could be summarized succinctly by a single numerical quantity.

In principle, projects whose score falls below a certain threshold might trigger further interventions, perhaps in the form of a more elaborate report, a committee meeting, or even implementation of a formal review process (see below).
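
As an illustration of how such scoring might work in practice, here is a minimal sketch in Python. The questions, weights, scoring scale, and threshold below are hypothetical placeholders of our own, not drawn from any of the checklists cited below; a real checklist would be drafted and maintained by the team or an ethics committee.

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    question: str
    weight: float            # relative importance of this question (illustrative)
    score: int | None = None  # researcher's self-assessment, 0 (unaddressed) to 5 (fully addressed)

# Below this normalized score, the milestone triggers further intervention.
# The value 0.6 is purely illustrative.
REVIEW_THRESHOLD = 0.6

def checklist_report(items: list[ChecklistItem]) -> float:
    """Summarize a completed checklist as a single normalized score in [0, 1]."""
    unanswered = [item.question for item in items if item.score is None]
    if unanswered:
        raise ValueError(f"Unanswered questions: {unanswered}")
    total_weight = sum(item.weight for item in items)
    weighted = sum(item.weight * (item.score / 5) for item in items)
    return weighted / total_weight

def milestone_check(items: list[ChecklistItem]) -> str:
    """Decide whether the milestone passes or escalates to a fuller review."""
    score = checklist_report(items)
    if score < REVIEW_THRESHOLD:
        return f"score {score:.2f}: escalate (e.g. written report or committee review)"
    return f"score {score:.2f}: no further action at this milestone"

if __name__ == "__main__":
    # Hypothetical example checklist, completed at a project milestone.
    checklist = [
        ChecklistItem("Have likely downstream users of this work been identified?", weight=2, score=3),
        ChecklistItem("Could the results plausibly be misused, and is this documented?", weight=3, score=2),
        ChecklistItem("Have relevant outside experts been consulted?", weight=1, score=4),
    ]
    print(milestone_check(checklist))
```

With the illustrative numbers above, the normalized score comes out to roughly 0.53, which falls below the made-up threshold of 0.6, so this milestone would trigger a follow-up rather than pass silently.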

1.1 Examples

  • The Seattle Children’s Hospital employs a data ethics checklist to help its team consider possible ethical issues arising in their use of data. Used as part of project management, the checklist is combined with ethics training for data professionals and overseen by a data ethics committee. (Montague et al. 2021)
  • The Fairness, Accountability, and Transparency in Machine Learning community derives a list of questions from their Principles for Accountable Algorithms. In this sense their approach mixes the checklist strategy with the next strategy we consider: “core principles”. (Diakopoulos et al., n.d.)

1.2 Pros

  • Can be quickly applied.
  • Flexible enough to fit virtually any project.
  • Minimizes time spent by researchers on ethics-related considerations.
  • Useful for projects in very early stages with few (or no) foreseeable applications.
  • Can be used to determine when an external expert is needed (especially in combination with other tools).

1.3 Cons

  • Has no internal feedback mechanism; provides no concrete steps to address ethical issues.
  • Can imply ethical considerations are as straightforward as ‘checking’ off simple test questions.
  • Easy not to take seriously, for example, by intentionally (or unintentionally) misinterpreting questions or simply failing to report certain issues.

2 Core Principles

A core principles strategy involves the clear articulation of a set of “core principles” for guiding ethical research. These principles are intended to form the foundation of a more complete governance system, including (ideally) concrete mechanisms to incentivize and enforce them, and to identify and course-correct violations. As such, this is the most open-ended of the four strategies we present, since the specific form it takes depends heavily on how it is implemented.

It is worth noting that these principles need not be purely negative, i.e., phrased in terms of what to avoid; they can also be goal-oriented, emphasizing the desired direction of research and implementation within an ethically sound framework. Such principles often form the most visible and public-facing element of a company’s ethical practices.

2.1 Examples

  • Microsoft (“Putting principles into practice”) states core principles of fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Each of these six principles is developed further by Microsoft, with an eye towards best practices for implementing and promoting them. (Microsoft Corporation 2020; Crampton 2022)
  • Salesforce (“From principles to practice”) aims to make technologies that are responsible, accountable, transparent, empowering, and inclusive. Salesforce has also developed an “Ethical AI Practice Maturity Model” to guide the development of AI research. (Baxter, n.d.)
  • The European Commission Panel (“Ethics by design”) states core principles of respect for human agency; privacy and data governance; fairness; individual, social, and environmental well-being; transparency; and accountability and oversight. They recommend implementing these principles using a “five-layer model” of increasing specificity, and report: “Ethics by Design is intended to prevent ethical issues from arising in the first place by addressing them during the development stage, rather than trying to fix them later in the process. This is achieved by proactively using the principles as system requirements.” (Dainow and Brey 2021)

2.2 Pros

  • Provides transparent, written structure for putting ethical concerns at the center of company culture.
  • General, overarching principles provide a way to standardize a company’s approach to ethical research, helping to avoid ad hoc measures and inconsistent messaging.
  • Creates public awareness about the central role of ethics in technology.

2.3 Cons

  • In the absence of strong governance practices (i.e., specific guidelines for how to promote and implement the principles), this strategy can function as “ethics washing”—giving the appearance of an ethically-minded company with no real bite.
  • The open-ended nature of how to implement this strategy means it can typically be only one part of a multi-pronged approach.
  • General principles may sound good in theory, but in practice can serve to limit or undercut the value or integrity of research in unforeseen ways, or even be counterproductive from an ethical standpoint. This is a particularly salient concern for projects that are farther removed from concrete applications, such as many of Topos’ research projects. For example, “privacy” seems like a noble value, but a blanket policy or ideal in this regard could end up supporting criminal activity in concrete applications.

3 Formalized Ethical Review Process

This is a formal, iterative review process similar in spirit to Institutional Review Board (IRB) reviews (required for all experimentation on human subjects), but differing in scope and orientation. Like IRB approval, the idea is that a successful “ethics review” (evaluated by an ethics review board) would be a prerequisite for funding or even beginning a new research project. The process is iterative and (ideally) collaborative: the review board is not intended to be adversarial, but rather to work together with researchers to re-tune project proposals and to identify concrete mitigation strategies for the potential ethical risks a given project might entail, before the project even begins.

It is important to note that the scope of this sort of review is substantially different from that of typical IRB assessments, which focus on the impacts on the particular subjects of the experiment. In the case of the ethics review process, by contrast, there will often be no proximate “subjects”: impacts must be understood much more broadly, in terms of the effects on subsequent research and technology, and the implications for society at large.

3.1 Example

  • Stanford Ethics and Society Review (ESR) is a formal review process designed to facilitate ethical and societal reflection as a requirement to access funding. (Bernstein et al. 2021b, 2021a)

3.2 Pros

  • Can provide step-by-step guidelines and clearly articulated interventions for specific projects.
  • Directly involves researchers in thinking about impact assessments.
  • Ethics reflection is presented as a precondition to research, supporting a culture of ethical thinking that is integrated into the research process.
  • Iterative nature enables a conversation between researchers and relevant experts.
  • Difficult to “game” the system.

3.3 Cons

  • Requires concrete projects with clearly delineated scopes.
  • Resource-intensive in relying on a review committee (though this could be mitigated through industry collaboration).
  • Can be time-consuming: may slow down or delay projects.

4 Parallel Synthesis

A synthesis strategy creates a dedicated role for examining and “synthesizing” the research work of the organization, to explicate its impacts and shed light on potential ethical implications.

The basic motivation comes from the following observation: the pipeline leading from work in theoretical mathematics to societal applications and impacts is long and difficult to understand, let alone predict. This makes the kinds of strategies employed by tech companies a somewhat awkward fit for an institute like Topos, whose focus is primarily on the earliest part of this pipeline. What is needed in this case is expertise in the pipeline process itself, i.e., someone whose job it is to understand both the mathematical foundations and the possible applications well enough to assess them realistically. Typical academic work prioritizes innovation over this sort of synthesis or translational work, since the latter doesn’t tend to produce anything “ground-breaking”: synthesis work is instead focused on organizing, streamlining, and clarifying existing research.

To adequately assess the downstream impacts of foundations-type research, what is needed is active synthesis—namely, expertise in precisely the process by which mathematical theory is translated over time into tangible impacts in more applied areas, and eventually to the world—performed at the same time as the research process itself (whence, parallel). This allows insights about the potential uses of a technology to inform and change the fundamental research itself, before it is structurally reified into a technological artifact or attracts a community of users.

Supporting this kind of expertise (e.g., through hiring and promotion practices) is a step outside of traditional academic priorities, but arguably constitutes an important, and perhaps even necessary, bridge connecting theory to application in the modern era.

At Topos, this “synthesis” person would likely have a background in mathematics, giving them the training and experience needed to understand the details of the research work. Instead of focusing on proving new theorems, developing new generalizations, and writing software, however, they would focus on understanding the way this mathematical work is picked up and used down the pipeline. This means they need not be an ethicist, but they would at least have the tools and general knowledge to connect with ethical issues.

Ultimately, the idea is that the wider viewpoint offered by such a “synthesis” person would uniquely position them to have relevant things to contribute to discussions about ethical impacts, not because they are also ethicists, but because they have a better understanding of how these mathematical tools can impact the world. In a sense, this sort of expertise is like a prerequisite for talking about ethics at all, since without it there is no clear way to connect the research being done with impacts on society.

4.1 Example

There is no direct example of this, since it is a new proposal, but the 1990 “Boyer report” Scholarship Reconsidered (Boyer 1990) outlines the academic case (relating to both research and education) for “synthesis” work of the sort referenced above, arguing for the importance of what it terms “the scholarship of integration”.

Expanding on the above summary, a “synthesis” person might perform tasks like the following:

  • Dialogue with experts at both “ends” of the pipeline: in the case of Topos, these would be category theorists, as well as relevant experts in fields that make use (directly or indirectly) of the mathematical foundations that Topos researches.
  • Study the history of applications of relevant theoretical work and use this to assist others in various tasks, like the writing of grant applications (which often require discussion of tangible applications with relevant precedents).
  • Advise and work in tandem with researchers who are looking to assess or articulate the ethical impacts/implications of their work, which requires an understanding of the pipeline.
  • Publish papers and/or run workshops/classes that present summaries of relevant connections between theory and application, so that those who are focused on “innovation” can also gain some understanding of the broader picture.

4.2 Pros

  • Directly responsive to the challenge of connecting theory work to societal applications and impacts.
  • Can clarify and sharpen how projects are framed with regard to their applications, in ways that support more successful research and grant proposals.
  • Creates a space for expertise in a process that is central to the mission of institutes like Topos but has rarely been studied/supported directly.
  • Can help clarify when it is relevant to bring in expertise from ethicists or other fields.

4.3 Cons

  • Opportunity cost: someone you hire to work in synthesis is someone you’re not hiring to work on innovation.
  • Untested: there are no clear examples of this to follow or to count as indicators of likely success.

We’re still exploring the various ways we might put into practice these processes for self- and group-reflection about the ethics of our work at Topos. But we’re guided by their importance. As Langdon Winner writes in his seminal paper “Do artifacts have politics?” (Winner 1980):

The things we call “technologies” are ways of building order in our world. Many technical devices and systems important in everyday life contain possibilities for many different ways of ordering human activity. Consciously or not, deliberately or inadvertently, societies choose structures for technologies that influence how people are going to work, communicate, travel, consume, and so forth over a very long time. …

By far the greatest latitude of choice exists the very first time a particular instrument, system, or technique is introduced. Because choices tend to become strongly fixed in material equipment, economic investment, and social habit, the original flexibility vanishes for all practical purposes once the initial commitments are made.

In helping to construct systems technologies, we have the privilege and opportunity of this great latitude of choice, and we hope that, in conjunction with wider societal voices, we can make these choices deliberately, to the benefit of society as a whole.

References

Baxter, K. n.d. “AI Ethics Maturity Model.” https://www.salesforceairesearch.com/static/ethics/EthicalAIMaturityModel.pdf.
Bernstein, MS, M Levi, D Magnus, B Rajala, D Satz, and C Waeiss. 2021a. “ESR: Ethics and Society Review of Artificial Intelligence Research.” https://arxiv.org/abs/2106.11521.
———. 2021b. “Ethics and Society Review: Ethics Reflection as a Precondition to Research Funding.” PNAS 118. https://doi.org/10.1073/pnas.2117261118.
Boyer, EL. 1990. Scholarship Reconsidered: Priorities of the Professoriate. Carnegie Foundation for the Advancement of Teaching.
Crampton, N. 2022. “Microsoft’s Framework for Building AI Systems Responsibly.” https://blogs.microsoft.com/on-the-issues/2022/06/21/microsofts-framework-for-building-ai-systems-responsibly/.
Dainow, B, and P Brey. 2021. “Ethics by Design and Ethics of Use Approaches for Artificial Intelligence.” https://ec.europa.eu/info/funding-tenders/opportunities/docs/2021-2027/horizon/guidance/ethics-by-design-and-ethics-of-use-approaches-for-artificial-intelligence_he_en.pdf.
Diakopoulos, N, S Friedler, M Arenas, S Barocas, M Hay, B Howe, HV Jagadish, et al. n.d. “Principles for Accountable Algorithms and a Social Impact Statement for Algorithms.” https://www.fatml.org/resources/principles-for-accountable-algorithms.
Klein, E. 2022. “I Didn’t Want It to Be True, but the Medium Really Is the Message.” https://www.nytimes.com/2022/08/07/opinion/media-message-twitter-instagram.html.
Microsoft Corporation. 2020. “Putting Principles into Practice: How We Approach Responsible AI at Microsoft.” https://www.microsoft.com/cms/api/am/binary/RE4pKH5.
Montague, E, TE Day, D Barry, M Brumm, A McAdie, AB Cooper, J Wignall, et al. 2021. “The Case for Information Fiduciaries: The Implementation of a Data Ethics Checklist at Seattle Children’s Hospital.” J Am Med Inform Assoc 28. https://doi.org/10.1093/jamia/ocaa307.
Winner, L. 1980. “Do Artifacts Have Politics?” Daedalus 109. https://www.jstor.org/stable/20024652.