“Gayface AI” & the challenges of tracking downstream consequences of AI

Many of the tools, concepts, and infrastructures we have built for understanding the ethics of science and engineering do not readily apply to data technologies such as machine learning and AI. A common misconception among data scientists is that if an Institutional Review Board (IRB) has signed off on a research study, then the study is "ethical" to conduct. However, IRBs are bound by specific laws that circumscribe the issues they are allowed to examine, meaning that many legitimate concerns about data science and technologies go untracked and unregulated.

I recently published an article on the PERVADE Medium channel about how this issue played out in a controversial study claiming that computer vision technologies powered by neural networks could predict from facial images whether a person is homosexual or heterosexual.

Medium: "'The study has been approved by the IRB': Gayface AI, research hype and the pervasive data ethics gap"

  • “IRBs are specifically mandated to avoid even considering the types of harms this research poses, which is downstream consequences to groups of people or society overall. Pervasive data of the type they draw upon is distinct from the type historically familiar to IRBs. Machine learning tools are designed to leverage general knowledge about patterns in a population in order to have an effect on individuals at a later point. This is an inverse of the traditional pattern of potential risks and benefits in human subjects research, wherein studying individuals leads to potential effects on populations. Machine learning can be weaponized in ways that traditional psychological or sociological research simply cannot.”