I am a PhD candidate in accountable geo-intelligence (i.e., responsible geo-AI) at the University of Twente. My PhD work focuses on two issues in particular: privacy and bias. I am fascinated by Sherlock Holmes' remark to Dr. Watson that the individual person is a mystery, yet in the aggregate the person becomes a mathematical certainty: "You can never foretell what any one (person) will do, but you can say with precision what an average number will be up to. Individuals vary, but percentages remain constant. So says the statistician." This is very much true, and perhaps even more pronounced with AI. The consequences of Holmes' words concern not only privacy but also bias and fairness: what if an AI system relies on sensitive attributes (e.g., gender, race, religion, etc.) to predict what a person will be up to, and recommends decisions that cause harm to that person?
Languages:
R · Python · Julia


