Project cards: a tool for research transparency

Category: ethics

Author: Alejandra Arciniegas

Published: 2022-10-24

Abstract

This blog post presents some early-stage, exploratory work on a kind of tool that may be useful for Topos (and other similar institutes!) as it matures as a company. In a nutshell, it is meant to serve as a framework to facilitate accessibility and discussion—internally and externally—of the various projects under development here.

At many tech companies and institutions, there is such a breadth of work taking place that it’s hard for any one person to have a holistic sense of it. To a certain extent this is the unavoidable consequence of specialization; however, this siloed, decentralized way of conducting research and developing technology can also be an obstacle to fruitful collaboration and cross-pollination of ideas between disciplines and sub-specialties. It can make it hard to understand the ‘big picture’, or the more general contexts in which specific tools and insights might be applicable.

At an institute like Topos, increasing transparency can yield a number of further benefits, such as making the work more accessible (and thus more attractive) to non-specialist outsiders. Moreover, it can increase the capacity to trace foundational work through to its potential applications, and thus to assess potential societal impacts and ethical implications.

Below we present an adaptation of the “model cards” tool, originally designed by Margaret Mitchell and collaborators for machine learning contexts (which face similar obstacles to the mutual comprehensibility of models). The notion of a “model” at play there is specific to ML systems; for an institute like Topos, we instead shift our unit of analysis to “projects” and develop the notion of “project cards” by adapting and reworking the model card template.

The core idea is to associate with each project some “metadata”, expressed in easy-to-understand language, that helps improve transparency both within the institute and to outsiders (e.g., investors, grant agencies, or potential third-party collaborators). This is in line with the original purpose of model cards:

Model cards are just one approach to increasing transparency between developers, users, and stakeholders of machine learning models and systems. They are designed to be flexible in both scope and specificity in order to accommodate the wide variety of machine learning model types and potential use cases. Therefore the usefulness and accuracy of a model card relies on the integrity of the creator(s) of the card itself. It seems unlikely, at least in the near term, that model cards could be standardized or formalized to a degree needed to prevent misleading representations of model results (whether intended or unintended). It is therefore important to consider model cards as one transparency tool among many, which could include, for example, algorithmic auditing by third-parties (both quantitative and qualitative), “adversarial testing” by technical and non-technical analysts, and more inclusive user feedback mechanisms. (p.228, Mitchell et al. 2019)

Project cards are meant first and foremost to be attached to new projects and work-in-progress, so researchers can get a sense of which projects are active and what they are all about. Ideally, they should be updated at major milestones and at project completion. (Retroactively creating project cards for already-completed projects is a possible extension that can be considered on a case-by-case basis.)

1 Key Information for Project Cards

Project Parameters: basic information about the project.

  • Person, team, or organization developing the project
  • Inception date + basic timeline of development
  • Relevant papers and other resources connected to the project
  • Citation details
  • License/IP status, if applicable
  • Contact information: where to send questions or comments about the project

Project Description: what is it?

  • Short, narrative description of the nature of the project in simple, non-technical language (as much as possible): an “elevator pitch”.
  • Could even be multiple summaries pitched at different audiences: e.g., specialist, non-specialist mathematician, non-mathematician (but potential investor).
  • What high-level problem(s) is this work/technology trying to solve? (All technologies are created to solve a problem; what is this one trying to fix?) Why is this formalism/technology useful?
  • Which parts of this project consist of abstract math and which parts are specifically technologies that implement or use the math? (Conceptually, it might be useful to keep those two separate, i.e., mathematics vs. software.)

Intended Uses: applications envisioned during development.

  • Primary intended uses: do the people working on the project have specific applications/uses in mind? How general/specific are they? (This helps users gain insight into how robust the project may be to different kinds of inputs.) If describing all intended uses is too difficult, provide concrete examples of intended use.
  • Primary intended users: is the project being developed to be used by a specific team or company? A specific industry, or field of study?
  • Out-of-scope use cases: this is a bit of brainstorming on the part of the project developers. Namely, what uses might this work be put to, outside the intended uses highlighted above? Is there other, existing work that is similar? If so, what uses has it been put to?
  • Intended non-use: where do the developers think the technology should not be used? (Developers usually have a good idea about when their system/model/tech is going to fail, whether because needed assumptions don’t hold, or the computational demands are too high, etc.)

Metrics: how is the “success” of the project measured?

  • Start date and (target) end date, if applicable. General chronology of the project (milestones, goals, deliverables). Record of previous metrics/goals and how they may have changed over time.
  • Post-project assessment: were the original goals of the project (as articulated by the developers, or perhaps the funders) actually achieved? This includes timelines, specific deliverables, etc. Were the goals modified or adapted, and if so, how and why? Note that this requires having actually articulated measurable goals at the early stages of the project.
  • More generally, there are inward and outward-facing measures of “material success”, and a somewhat blurry line between them.
  • Inward: citation by other researchers, grant funding, attracting new students/researchers, etc. These standards of success are very similar to what counts as “success” in academia.
  • Outward: industry pickup, patents, corporate investment, all reflecting real-world applicability. If the project is envisioned to make a tangible impact on the world (through industry, government, or broader social impacts), then its success is at least partly a function of the extent to which it actually does so.

Data (as applicable): details of any data used in the course of the project.

  • Datasets: data that is used, collected, or generated via simulation (including synthetic data). What datasets were used to inform/evaluate the project? Are these publicly available?
  • Motivation: Why were these particular datasets chosen? What might change if they had been chosen differently?
  • Preprocessing (if applicable): how was the data preprocessed for evaluation?

Ethical Considerations

  • Developers are invited to brainstorm about any ethical considerations they think may be specifically relevant to their project, including the mechanism by which they might arise, and any thoughts about mitigation (the elimination of ethical risk generally being impossible).
  • Since in many contexts the above will be hard to do without some experience/training, it will often be useful to provide some form of guidance to support this aspect of the project card. This could take many forms, including one-on-one consultations, broader presentations or “ethics workshops” available to researchers, or other static tools (e.g., an “ethics checklist/survey”, developed separately but integrable into the project card framework).

Caveats and Recommendations

  • This section collects additional concerns, recommendations, or important notes that weren’t included in previous sections.
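
To make the template above concrete, the sketch below shows one (purely hypothetical) way a project card could be represented in machine-readable form, using a Python dataclass. Every field name, and the `ProjectCard` class itself, is an illustrative assumption rather than an established schema; in practice a card could just as well live in a wiki page or a shared document.

```python
# A minimal, hypothetical sketch of a project card as a Python dataclass.
# All field names are illustrative, not an established Topos schema.

from dataclasses import dataclass, field
from datetime import date
from typing import Optional


@dataclass
class ProjectCard:
    # --- Project Parameters ---
    name: str
    developers: list[str]          # person, team, or organization
    inception_date: date
    contact: str                   # where to send questions or comments
    papers: list[str] = field(default_factory=list)
    citation: Optional[str] = None
    license: Optional[str] = None  # license/IP status, if applicable

    # --- Project Description ---
    elevator_pitch: str = ""       # short, non-technical summary
    # Optional summaries pitched at different audiences, e.g.
    # {"specialist": "...", "investor": "..."}
    summaries: dict[str, str] = field(default_factory=dict)

    # --- Intended Uses ---
    intended_uses: list[str] = field(default_factory=list)
    intended_users: list[str] = field(default_factory=list)
    out_of_scope_uses: list[str] = field(default_factory=list)
    intended_non_uses: list[str] = field(default_factory=list)

    # --- Metrics ---
    target_end_date: Optional[date] = None
    milestones: list[tuple[date, str]] = field(default_factory=list)

    # --- Data, Ethics, Caveats ---
    datasets: list[str] = field(default_factory=list)
    ethical_considerations: list[str] = field(default_factory=list)
    caveats: list[str] = field(default_factory=list)

    def record_milestone(self, note: str, when: Optional[date] = None) -> None:
        """Append a dated milestone, preserving a record of how goals evolve."""
        self.milestones.append((when or date.today(), note))


# Example lifecycle: create a card at inception, update it at a milestone.
card = ProjectCard(
    name="Example formalism",
    developers=["Example team"],
    inception_date=date(2022, 10, 24),
    contact="questions@example.org",
    elevator_pitch="A one-paragraph, non-technical description goes here.",
)
card.record_milestone("First prototype released; original goals unchanged.")
```

A structured representation like this is entirely optional, but a machine-readable form would make it easy to render cards uniformly and to aggregate them, e.g., to list all active projects alongside their intended users.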

While this suggested tool has yet to become standard practice at Topos, we hope it helps add transparency to our evolving thinking, and creates dialogue about the prerequisites for informed analysis of the social purposes and ethical implications of scientific work. It’s not always easy to unpack these subtleties, especially given the abstract and technical nature of fundamental mathematical research!

References

Mitchell, Margaret, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. “Model Cards for Model Reporting.” In Proceedings of the Conference on Fairness, Accountability, and Transparency, 220–29.