Recommender systems have been attracting increasing attention. The number of online platforms that support the information needs and product search of their users by providing personalized suggestions has been growing rapidly. However, there is also growing debate about the role and impact of these systems, e.g., in connection with filter bubbles, echo chambers, fake news and micro-targeting. For some of these issues, personalization and recommender systems are held responsible.
Users are becoming increasingly sensitive to these issues and are demanding systems that mitigate bias and that are not designed merely to increase the number of interactions. At the same time, users often perceive the suggestions delivered by recommender systems as too simple and obvious. Especially in domains related to media, leisure and lifestyle, emotional aspects play a decisive role and decisions are not based on rational criteria alone. State-of-the-art approaches often fail to take these aspects into account. In these domains, new perspectives and more comprehensive user models that also consider implicit preferences are required.
In this CD Laboratory we aim to address these issues. We will develop approaches that adapt better to the domain as well as to the preferences and needs of different users or groups of users. To this end, beyond-accuracy measures including novelty, diversity and serendipity will be considered. To facilitate this, a multi-level user model will be introduced that captures users on three different levels: the individual level, the group level and the network level. To build these models, multi-faceted feature sets will be extracted and used to learn representations of both users and items in a joint user-item vector space. Based on these embeddings, clustering will be performed, and social relations will be used to complement the resulting models.
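The core of this pipeline can be illustrated with a minimal sketch: learn user and item embeddings in a shared vector space from an interaction matrix, then cluster the users in that space. All data here is a synthetic toy example, the factorization is plain SGD-style matrix factorization, and the k-means routine is a bare-bones illustration; the embedding dimension and cluster count are arbitrary assumptions, not values from the proposal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy implicit-feedback matrix: rows = users, columns = items (1 = interaction).
R = rng.integers(0, 2, size=(20, 30)).astype(float)

n_users, n_items = R.shape
k = 4  # embedding dimension (assumed; tuned per domain in practice)

# Learn a joint user-item vector space via simple matrix factorization.
U = 0.1 * rng.standard_normal((n_users, k))
V = 0.1 * rng.standard_normal((n_items, k))
lr, reg = 0.05, 0.01
for _ in range(200):
    E = R - U @ V.T                 # prediction error
    U += lr * (E @ V - reg * U)     # gradient step for user embeddings
    V += lr * (E.T @ U - reg * V)   # gradient step for item embeddings

# Cluster users in the shared embedding space (bare-bones k-means).
def kmeans(X, n_clusters, iters=50):
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for c in range(n_clusters):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(0)
    return labels

groups = kmeans(U, 3)  # e.g., three user groups for the group-level model
```

Because users and items live in the same space, the same embeddings can later serve both the individual-level model (a user's own vector) and the group-level model (the user's cluster).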
This perspective leads to a more persistent user model that allows beyond-accuracy measures to be studied more systematically across domains. Furthermore, we will capture the dynamics of the different beyond-accuracy measures and their relationships over time, and also assess the long-term effects of two types of bias. The proposed research moreover relates to the broader field of Digital Humanism, where striving for beyond-accuracy objectives and fairness in the context of software systems and algorithms represents a central concern.
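To make the beyond-accuracy notion concrete, two formulations commonly used in the literature (not specific to this proposal) are intra-list diversity, the average pairwise cosine distance between the embeddings of the items in one recommendation list, and novelty as mean self-information of the recommended items under an item-popularity distribution. The data below is synthetic and only illustrates the computation.

```python
import numpy as np

def intra_list_diversity(item_vecs):
    """Average pairwise cosine distance within one recommendation list."""
    X = item_vecs / np.linalg.norm(item_vecs, axis=1, keepdims=True)
    sims = X @ X.T
    n = len(X)
    # mean of (1 - cosine similarity) over all distinct item pairs
    return float((1 - sims)[np.triu_indices(n, k=1)].mean())

def novelty(recommended, popularity):
    """Mean self-information -log2 p(i) of the recommended items."""
    p = popularity[recommended] / popularity.sum()
    return float(-np.log2(p).mean())

rng = np.random.default_rng(1)
vecs = rng.standard_normal((5, 8))                  # embeddings of 5 recommended items
pop = rng.integers(1, 100, size=50).astype(float)   # interaction counts for 50 items
ild = intra_list_diversity(vecs)
nov = novelty(np.array([3, 7, 11]), pop)
```

Tracking such quantities per recommendation list over time is one straightforward way to observe the dynamics of beyond-accuracy measures mentioned above.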