Press & Publications

Avanade’s TechSummit 2016 panel on digital ethics

I recently had the honor of participating in a panel at Avanade’s annual TechSummit conference. Organized by Steven Tiell of Accenture’s TechVision team, we were tasked with discussing the role of digital ethics and digital trust in the enterprise. I joined Steven on stage with Bill Hoffman, Associate Director of the World Economic Forum, and Scott David, Director of Policy at the University of Washington Center for Information Assurance and Cybersecurity. Below are my prepared remarks, which of course differ extensively from what I actually got around to saying on stage.

1. We’ve seen ethics requirements for medical and academic research, particularly when federal dollars are at play. Why should businesses care about ethics in their research?

Businesses should care about ethics most of all because it is, by definition, the right thing to do. But to go beyond a pat answer, I think it is useful to define the domain of “ethics.” I think of ethics as the methods and tools you use to make a consequential decision when there is relatively little settled guidance about the right thing to do. If you knew the right thing to do, then it would probably be a matter for compliance or legal departments. I like how digital sociologist Annette Markham recently put it when discussing a major data research scandal: “ethics is about making choices at critical junctures,” particularly when those choices affect other people. What I would add to Annette’s definition is that ethics is not just the decisions themselves, but also all the work you have to do in advance to enable those critical decisions. You need the capacity to identify and evaluate those critical junctures, and then to make efficient, consistent and actionable decisions. Done well, ethics is a future-oriented stance. In my opinion, building the habits and infrastructures that make it possible for businesses to make good choices at critical junctures is simply good for the bottom line in the long run. It will certainly enable businesses to identify and mitigate risks more effectively.

When it comes to the matter of research ethics in particular, there are three aspects that bear more scrutiny when considering how and why enterprises should engage in ethics review practices.

First, because businesses now hold more data about human behavior than any other entity in human history, the value of those businesses is increasingly indexed to what they can do with that data now and in the future. Thus, the types of research being done look like the types of research that have traditionally been located in university settings. It should indicate something important to us that academic researchers and institutions have invested so much in handling research ethics: research practices carry significant risk and require sustained attention.

Second, anyone can now be a researcher and everyone is a research subject. Yet all of our familiar ethics norms and infrastructures rest on outdated assumptions about institutional boundaries that once created formal and informal professional limits on who could do consequential research. Those assumptions do not hold when human-data research happens everywhere. Without the familiar institutional boundaries, businesses will need to pick up the slack somehow.

Third, big data research methods do pose new kinds of risks for the enterprise. Holding so much private data, and using that data to intervene in people’s lives in a tailored, personalized fashion, poses risks that go beyond privacy alone. Research is often perceived as creepy or controlling even where products that do the same thing are not. It is therefore important to align design practices, product development and ethics review in a manner that users of your services, and providers of your data, can be comfortable with.

Keynote presentation at Cambridge data ethics workshop

On 10 June 2016, I will be giving a keynote talk at the Data Ethics Workshop, hosted by the Center for Research in the Arts, Social Sciences and Humanities at Cambridge University in the UK. I look forward to meeting some of the great thinkers in this field from the other side of the pond, and learning more about the different data ethics landscape in the EU.

Speaker: Jake Metcalf
Institution: Data & Society Research Institute; Founding Partner, Ethical Resolve
Title: Data subjectivity: responding to emerging forms of research and research subjects

Abstract: There are significant disjunctions between the established norms and practices of human-subjects research protections and the emerging research methods and infrastructures at the heart of data science and the internet economy. For example, long-standing research ethics regulations typically exempt from further review research projects that utilize pre-existing and/or public datasets, as most data science research does. This was once a sound assumption because such research required no additional intervention into a person’s life or body, and the ‘publicness’ of the data meant all informational or privacy harms had already occurred. However, because big data enables datasets to be (at least in theory) widely networked, continually updated, infinitely repurposable and indefinitely stored, this assumption is no longer sound: big data allows harms to become networked, distributed and temporally stretched, such that they can take place far outside the parameters of the research itself. Familiar protections for research subjects need rethinking in light of these changes to scientific practices. In this talk I will discuss how a historicization of ‘human subjects’ in research enables us to critically interrogate an emerging form of research subjectivity in response to the changing conditions of data-driven research. I will ask how data scientists, practitioners, policy-makers and ethicists might account for the emerging interests and concerns of ‘data subjects,’ particularly in light of proposed changes to research ethics regulations in the U.S.

New academic paper on human-subjects research and data ethics

Ethical Resolve’s Jake Metcalf has a new article, co-authored with Kate Crawford, about the strained relationship between familiar norms and policies of research ethics and the research methods of data analytics. It is available in pre-publication form on the open-access Social Science Research Network, and will soon appear in the peer-reviewed journal Big Data & Society.

Where are Human Subjects in Big Data Research? The Emerging Ethics Divide

Jacob Metcalf
Data & Society Research Institute

Kate Crawford
Microsoft Research; MIT Center for Civic Media; NYU Information Law Institute

May 14, 2016

Big Data & Society, Spring 2016

Abstract:
There are growing discontinuities between the research practices of data science and established tools of research ethics regulation. Some of the core commitments of existing research ethics regulations, such as the distinction between research and practice, cannot be cleanly exported from biomedical research to data science research. These discontinuities have led some data science practitioners and researchers to move toward rejecting ethics regulations outright. These shifts occur at the same time as a proposal for major revisions to the Common Rule — the primary regulation governing human-subjects research in the U.S. — is under consideration for the first time in decades. We contextualize these revisions in long-running complaints about regulation of social science research, and argue data science should be understood as continuous with social sciences in this regard. The proposed regulations are more flexible and scalable to the methods of non-biomedical research, but they problematically exclude many data science methods from human-subjects regulation, particularly uses of public datasets. The ethical frameworks for big data research are highly contested and in flux, and the potential harms of data science research are unpredictable. We examine several contentious cases of research harms in data science, including the 2014 Facebook emotional contagion study and the 2016 use of geographical data techniques to identify the pseudonymous artist Banksy. To address disputes about human-subjects research ethics in data science, critical data studies should offer a historically nuanced theory of “data subjectivity” responsive to the epistemic methods, harms and benefits of data science and commerce.

Keywords: data ethics, human subjects, common rule, critical data studies, data subjects, big data

Skynet starts with data mining: thinking through the ethics of AI

I was recently interviewed by John C. Havens of Mashable about the creation of data ethics and AI ethics committees.

The ethics of artificial intelligence is becoming a much more concrete public discussion, particularly with the recent open letter advocating for a ban on autonomous weapons systems. The letter, organized by the Future of Life Institute and signed by more than 10,000 people, including many AI researchers and prominent tech leaders, calls for an international ban on autonomous weapons systems that can operate without meaningful human input.

This follows on the heels of major media attention earlier in the year when Bill Gates, Elon Musk and Stephen Hawking argued that artificial super-intelligence poses a future existential threat to humanity (all three also signed an earlier FLI open letter). Hawking told the BBC that “The development of full artificial intelligence could spell the end of the human race.” There are reasons to be skeptical of some of this fear, not least of which is the definitional problem of actually getting a handle on what counts as AI, and whether it would ever have the generalized, incredibly plastic intelligence that human bio-brains do, or the ability to maintain machine bodies without humans. (My favorite semi-serious reason for doubting is Baratunde Thurston’s point that if AI looked like human intelligence in aggregate, it would spend all day taking cat pictures and trying to sell the rest of us stuff.)
