The goal of this project is to help machine learning developers make better decisions about properties like fairness, justice, and equity. Current technical approaches to “fair machine learning” do not yet address the fact that making a model fair requires a significant amount of decision making on the part of the developer (e.g., which fairness definition to apply?). A developer who acts arbitrarily runs the risk of causing a great deal of harm. In this project, we examine both techniques for informing these decisions and tools for understanding the decisions after they have been made.
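As a small illustration of why the choice of fairness definition is itself a consequential decision, the toy example below (my own hypothetical data, not from any real system) shows a classifier that satisfies demographic parity across two groups while violating equal opportunity, so a developer who picks one definition arbitrarily may be satisfied with a model that the other definition would flag as unfair.

```python
# Hypothetical toy data: group membership, true labels, and a
# classifier's predictions for eight individuals.
groups = [0, 0, 0, 0, 1, 1, 1, 1]
y_true = [1, 1, 0, 0, 1, 0, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]

def positive_rate(g):
    # P(pred = 1) within group g -- the quantity demographic parity compares.
    preds = [p for grp, p in zip(groups, y_pred) if grp == g]
    return sum(preds) / len(preds)

def true_positive_rate(g):
    # P(pred = 1 | true = 1) within group g -- the quantity equal
    # opportunity compares.
    positives = [p for grp, t, p in zip(groups, y_true, y_pred)
                 if grp == g and t == 1]
    return sum(positives) / len(positives) if positives else 0.0

dp_gap = abs(positive_rate(0) - positive_rate(1))        # 0.0: parity holds
eo_gap = abs(true_positive_rate(0) - true_positive_rate(1))  # 0.5: violated
```

Here both groups receive positive predictions at the same rate (demographic parity gap of 0), yet qualified members of group 0 are selected half as often as qualified members of group 1, so the two definitions give conflicting verdicts on the same model.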
For a better sense of the overall shape of this project, a position paper that Julia and I presented at the FTC Workshop on Technology and Consumer Protection describes several of these aspects in greater detail.