Kosinski's "end of privacy" thesis holds that we are living in an age of eroding privacy. Privacy can be defined as control over information about oneself. Dr. Kosinski's argument is that it takes so little information to completely identify a person that most people are already identified. For example, an algorithm can predict your personality more accurately than your spouse can, based on just 250 Facebook likes. He urges us to acknowledge that we live in a post-privacy world and to figure out how best to live in it, organizing society and culture so that everyone is safe. We should also be more deliberate about when and how we encrypt data, to carve out spaces where privacy still holds.

As mentioned in class, we all have a right to privacy. Privacy has moral value because it shields us, providing freedom and independence: freedom from scrutiny, prejudice, pressure to conform, exploitation, and the judgment of others. The stakes can be high. There are still cultures and countries where homosexuality is illegal and can be punished by death, so losing control over that kind of information can be life-threatening.

At the same time, Kosinski is optimistic that AI has enormous beneficial impacts and is actually helping humans make more accurate and less biased decisions. His example was that using credit scores in hiring reduced the racial biases that employers usually rely on when making hiring decisions. Another exciting application, mentioned in our other pre-class video, is that researchers were able to predict whether a person has an illness such as liver cancer, Alzheimer's, or Parkinson's disease from Bing search history, mouse movement patterns, and click rates. It is astounding how accurately an algorithm can predict people's personality traits, and how little data it takes to do so.

Kosinski gives very clear examples of how we are living in a post-privacy world. He urges us not to agonize over whether we are giving away our data: many big companies already have it, it crosses borders easily and almost invisibly, it is omnipresent in our environment in the form of distributed AI, and it is therefore already out of our control. He seems to think this is not such a big problem, and that it is most likely more beneficial than harmful. Instead, he urges us to think about how to regulate the ethical use of all this data, preventing misuse while still allowing the data to help us make exponential discoveries. I agree with this. My gut reaction is uneasy about such a powerful technology, one that may be hard to control or understand, but since it is helping improve society so much, we should continue using it.

Frederike Kaltheuner from Privacy International made the good point that data privacy should be understood from a cultural perspective, and so can have different implications depending on where the data is used. I liked her observation that an algorithm will always have some bias, and that adding complexity can remove some of it; the tradeoff is that added complexity sometimes costs us interpretability. I think the main point of concern is how powerful AI initiatives may start out morally earnest but accidentally transform into something discriminatory or hate-mongering. The example Kosinski used was Facebook: it was built to give users a pleasant networking experience, but optimizing the algorithm to keep people on the site longer produced the negative byproduct of showing users controversial and racist material.
The other panelists pointed out that we should be very cautious about the limitations and dangers of making inferences with AI, since there are sensitive cultural differences in how someone's information could be used. Humans are biased too, and human decision-making can be just as much of a black box as AI when it comes to justifying a decision; in some cases AI may actually be less biased, so it should not be ruled out for decision-making purposes. Kosinski argues that algorithms are easier to study than humans, so going forward we should be able to improve our understanding and ethical use of algorithms more easily than we will ever understand humans. I agree: I have a background in neuroscience and know how much we have yet to learn about the human brain.

There is much to consider once we accept our post-privacy state. Technology is moving faster than we can think about or understand it, so how are we supposed to keep up with its regulation? Is the benefit of using something we don't understand worth the unknown risk? What happens when an algorithm is not 100% accurate? If a decision is dangerous to get wrong, should we rely on AI to make it? These are big questions that deserve careful consideration and open discussion.