SoBigData Articles

Privacy Preserving Explanations in Recommender Systems: A TNA experience

By Gaurav Pandey, Faculty of Information Technology, University of Jyväskylä, Finland

Host: Avishek Anand, Assistant Professor, L3S Research Center, Hannover, Germany

I was fortunate to have a short research visit to the L3S Research Center in Hannover, Germany. The visit focused on initial ideas related to "Privacy Preserving Explanations in Recommender Systems". Explainability of recommender systems has gained much attention in recent years, with many efforts aiming to explain to a user why a particular item has been recommended to her. However, to the best of our knowledge, there are no existing studies that examine whether explanations can become a source of user privacy breaches.

A user-to-user privacy breach can occur because of a recommendation explanation when certain preferences (e.g. likes, views, ratings) or personal attributes (e.g. location, organization, occupation) of a user are revealed to other users. Therefore, the study focused on identifying the cases where explanations could potentially lead to a privacy breach.

Moreover, a privacy breach could be direct, derived, or in terms of a level of certainty. A direct breach is the simplest, where the private information of a user is directly revealed to other users; for example: “You are recommended this movie because user A has given 5 stars to it”. Here, the privacy of user A is breached. Alternatively, there could be a derived breach, where the preferences of users can be reverse-engineered; for example: “Two of your friends living in City X like movie A”. If the user has only two friends in City X, then their preferences can be derived. Finally, there could be cases where the private information is revealed not with full certainty, but with a high level of it. Consider the example: “You are recommended Hospital X because more than 90% of your online friends receive cancer treatment there”. If the user has only 10 online friends, their sensitive information is revealed with a high level of certainty.
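To make the group-size and certainty reasoning above concrete, here is a minimal sketch (not part of the original study) of how such risky explanations might be flagged. It assumes a hypothetical representation in which each explanation records the size of the user group it refers to and the fraction of that group sharing the sensitive preference; the names `Explanation`, `MIN_GROUP_SIZE`, and `MAX_CERTAINTY` and the threshold values are illustrative assumptions.

```python
from dataclasses import dataclass

# Illustrative thresholds (assumptions, not values from the study):
# an explanation is risky if it points at too small a user group, or if it
# lets the reader infer a sensitive fact about group members too confidently.
MIN_GROUP_SIZE = 5       # smallest "anonymity set" we tolerate
MAX_CERTAINTY = 0.8      # highest acceptable inferable probability


@dataclass
class Explanation:
    """Hypothetical summary of a recommendation explanation."""
    text: str
    group_size: int            # how many users the explanation refers to
    share_with_trait: float    # fraction of that group with the sensitive trait


def is_privacy_risky(exp: Explanation) -> bool:
    """Flag explanations that could cause a derived or probabilistic breach."""
    # Derived breach: with a tiny group ("two of your friends in City X"),
    # the reader can often pinpoint exactly who holds the preference.
    if exp.group_size < MIN_GROUP_SIZE:
        return True
    # Probabilistic breach: "more than 90% of your 10 online friends ..."
    # reveals a sensitive trait of each friend with high certainty.
    if exp.share_with_trait > MAX_CERTAINTY:
        return True
    return False


if __name__ == "__main__":
    examples = [
        Explanation("Two of your friends living in City X like movie A", 2, 1.0),
        Explanation("More than 90% of your online friends receive cancer "
                    "treatment at Hospital X", 10, 0.9),
        Explanation("Many users with similar tastes liked this movie", 5000, 0.3),
    ]
    for exp in examples:
        print(f"risky={is_privacy_risky(exp)!s:5}  {exp.text}")
```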

What information users consider private often varies from person to person. Therefore, we propose that recommender systems should explicitly ask users for permission regarding which particular information (preferences and attributes) can be revealed to other users. Moreover, future directions include the creation of methods that can detect explanations that lead to a (direct or derived) privacy breach.
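As one way such explicit permissions might be enforced, the following minimal sketch checks each piece of personal data used by an explanation against a per-user consent record. The `consent` structure, the attribute names, and the helper functions are hypothetical illustrations under these assumptions, not the proposed system.

```python
from typing import Optional

# Hypothetical per-user consent record: what each user has agreed to expose
# to other users (anything not explicitly permitted stays private).
consent = {
    "userA": {"ratings": True, "location": False, "occupation": False},
    "userB": {"ratings": False, "location": False, "occupation": False},
}


def can_disclose(user_id: str, attribute: str) -> bool:
    """Return True only if the user explicitly allowed sharing this attribute."""
    return consent.get(user_id, {}).get(attribute, False)


def filter_explanation(user_id: str, attribute: str, explanation: str) -> Optional[str]:
    """Keep an explanation only if the personal data it relies on may be shown."""
    return explanation if can_disclose(user_id, attribute) else None


# userA allows ratings to be shown, userB does not.
print(filter_explanation("userA", "ratings",
                         "Recommended because user A gave this movie 5 stars"))
print(filter_explanation("userB", "ratings",
                         "Recommended because user B gave this movie 5 stars"))
```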

In sum, the research visit helped me look into a relatively new area of research that I would like to explore in further detail. Moreover, it was a great opportunity for networking and forming future collaborations.