
"TO BE HUMAN IS TO CHANGE"

An Interview with Data Ethicists Paul Wolpe and Nassim Parvin

Dr. Paul Wolpe is the Director of the Center for Ethics at Emory University, and Dr. Nassim Parvin is Associate Professor in the School of Literature, Media, and Communication at the Georgia Institute of Technology. We talked to them about their impressions of DATA by Matthew Libby, the danger of predictive algorithms taking away human agency, and the need for storytelling to illuminate human-technology relationships.

 

ALLIANCE: What resonated for you most in the play DATA?

Paul Wolpe Maneesh, the title character, is trying to be responsible about the ethics of his position as an entry-level employee at a large tech company. He wrote an algorithm as a student that the company can now use for much more complex and, in his view, unethical purposes. He's got a moral dilemma, and the way he works through it is the equivalent of Elisabeth Kübler-Ross's five stages of grief. Maneesh goes through the stages of making an ethical decision: he tries to abdicate responsibility, he resists, he capitulates, he tries to walk away and let them do what they want. It's the set of emotional dynamics that real workers go through when they confront something their company does that they find problematic.

Nassim Parvin My background is in electrical engineering, and early in my career I found myself drawn to cryptography. One of the main applications of cryptography is in wartime, when you want to send secret messages. I had my own moral dilemma when it became clear that war cryptography was not what I wanted to do. At the time, I also had a chance to transition from my studies back home in Iran to my studies in the United States, and it led to a PhD in design and ethics. In the play, I relate most to the character of Riley and her position as an outsider. She has to work with “techbros” and pretend that the toxic culture she has to endure is OK. Her insights and concerns are easily dismissed because she is a woman.

 

ALLIANCE: Many of us have little or no idea how predictive algorithms assist in important decision-making, from determining who should receive a visa to predicting who is more likely to commit a crime. Even some of Maneesh’s co-workers in DATA find the algorithms produced by the company beyond their comprehension. Why is there such an aura of complexity around algorithms? Should we be concerned about this?

Paul Wolpe Algorithms can be many, many thousands of lines long. There are a lot of internal decisions being made, which often makes it very difficult to know why a particular output has been produced by the algorithm. What characterizes artificial intelligence and machine learning is that the system learns and modifies itself, and understanding how it reaches a certain conclusion is, in many cases, very difficult. This is called the problem of transparency, or explainability. The other side of the question is that algorithms can be proprietary. People create algorithms for their business, and they don't want to explain exactly how those algorithms work because it is proprietary information. Both of these make the situation complicated.

Nassim Parvin Another point is that we are throwing all of this money into designing predictive algorithms that condemn people to their past, whereas to be human is to change. We want to find, and fund, situations where somebody actually gets out of jail, gets back on their feet, and remakes their life. Instead, we are basically saying: If I purchased the pink dress yesterday, I will probably want another pink dress tomorrow. And if I committed a crime yesterday, I will probably do the same. It's a very grim view of what it is to be human.

We are throwing all of this money into designing predictive algorithms that condemn people to their past. It's a very grim view of what it is to be human.

Paul Wolpe Agreed. The thing that concerns me the most about algorithms is the removal of human agency, although we are not there yet.

 

ALLIANCE: What do you see as most promising about algorithms?

Nassim Parvin The biggest promise is in the interaction of human agency and machine abilities. For example, in pathology we now have machines that can process images of lungs and detect traces of cancer that the human eye can't see. We can use these instruments to help us do things we couldn't do on our own. We need to ask: What are the things that machines can do? And where are the places where we can intervene to make sure that we are making the best decisions in partnership with machines?

 

ALLIANCE: You work with some of the nation’s most talented students at Georgia Tech. What do you see as the biggest challenge your students face in the transition from work in academia to work as employees in tech companies?

Nassim Parvin Our students want to make a positive change, but they are often ill-equipped to deal with ethical issues because they see ethics as their personal moral responsibility. They think that as long as they are committed to doing the right thing, they will be able to do it, and that doing the right thing is obvious. Whereas when they go out in the world, they can't always make all the decisions. And even when they can, it's not always clear what problem they are trying to address or how they might address it. I often go back to the failure of our educational institutions: we teach subject-matter expertise to students but fail to accompany it with ways of thinking about ethical issues.

 

ALLIANCE: What role, if any, can DATA and other representational practices play in service of data ethics?

Nassim Parvin Storytelling is one of the main ways that we learn about ethics. Stories are grounded in the concrete nature of situations that call for ethical decision-making, and they allow us to put ourselves in the place of the characters. That's really important. And then we can separate ourselves from the situation and think about what we might do in response. It’s the dialectic of ethical theory and ethical practice present in storytelling that is the key to great theater and great ethics.

Paul Wolpe I read a lot of science fiction as a kid. And the questions in DATA are questions that in one way or another have come up over and over again in the history of people and their relation to technology. We continue to refine them to find out where our anticipations were wrong and where they were right. And we are rarely right when it comes to predicting what technology will be like in the future. Art has to constantly reinvent itself and raise these questions in light of the current reality. What I love about this particular play is how spot on it is with what's happening right now, and tomorrow, with these kinds of technologies.

 

 

Additional talkbacks will be held on May 13th and 20th. For tickets and info, visit alliancetheatre.org/data.

 

 

Nassim Parvin is an Associate Professor in the School of Literature, Media, and Communication at Georgia Tech. Dr. Parvin’s interdisciplinary scholarship has appeared in design, computing, and STS venues, and her designs have received multiple awards and been exhibited in venues such as the Smithsonian. She is on the editorial board of Design Issues and serves as a Lead Editorial Team member of Catalyst: Feminism, Theory, Technoscience.

 

 

Paul Root Wolpe, Ph.D. is the Raymond F. Schinazi Distinguished Research Chair of Jewish Bioethics, Professor of Medicine, Pediatrics, Psychiatry, and Sociology, and the Director of the Center for Ethics at Emory University. He is Editor-in-Chief of the American Journal of Bioethics Neuroscience. For 15 years he served as the Senior Bioethicist at the National Aeronautics and Space Administration (NASA). Dr. Wolpe is the winner of the 2011 World Technology Network Award in Ethics and was named one of Trust Across America’s Top 100 Thought Leaders in Trustworthy Business Behavior.

 

 
