
2021-10-05

User-Centricity Matters: My Reading List from RecSys 2021



This article is part of the series: Productizing Data with People

Conference season is here, and RecSys is back. I've been watching the evolution of recommender systems over the last few years, including attending RecSys 2016 and 2018 in person. It's great to see the research community return to a physical conference, unlike in 2020.

After taking a quick look at the list of accepted papers, one of the biggest trends I see in 2021 is user-centricity: how to allow users to intervene in the recommendation process while minimizing the risk of bias and maximizing the diversity and fairness of recommendations. With that in mind, the list below highlights the papers that attracted me the most:

Of course, this observation is "biased" by my current personal interest (ethical challenges in recommender systems), but it's certainly an emerging area for the community, as the conference has dedicated sessions for "Echo Chambers and Filter Bubbles", "Users in Focus", and "Privacy, Fairness, Bias".

I started seeing this tendency over the last couple of years, and it's a good indication that researchers are trying to go beyond simple accuracy metrics. It reminds me of a podcast episode in which Joseph Konstan, one of the legendary professors in the field of recommender systems, emphasized how important it is to define the right metrics; that is, quantifying the outcome of a recommendation is a big challenge in this domain.

When I published "Understanding Research Trends in Recommender Systems from Word Cloud" in 2017, this type of trend wasn't yet clear, and the papers mainly discussed the "problem" (e.g., review, ranking, product, online) rather than "end users" or "eventual impact". In 2020 and 2021, although many studies still talk a lot about algorithms and accuracy improvements, terms like "bias", "metric", and "behavior", which address the missing foundational considerations and make recommenders more user-centered, have clearly risen.

2020:

[Word cloud of RecSys 2020 paper titles]

2021:

[Word cloud of RecSys 2021 paper titles]
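
For those curious how such visualizations can be built, here is a minimal sketch (not the exact script behind the images above) that assumes the accepted paper titles are saved one per line in a hypothetical text file and uses the Python wordcloud package:

```python
# Minimal sketch: build a word cloud from a text file of accepted paper titles.
# "recsys-2021-titles.txt" is a hypothetical file with one title per line.
from wordcloud import WordCloud, STOPWORDS

with open("recsys-2021-titles.txt") as f:
    titles = f.read().lower()

# Exclude generic terms so domain-specific words ("bias", "metric", ...) stand out.
stopwords = STOPWORDS | {"recommendation", "recommendations", "recommender", "system", "systems"}

wc = WordCloud(width=800, height=400, background_color="white", stopwords=stopwords)
wc.generate(titles)
wc.to_file("recsys-2021-wordcloud.png")
```

The stopword list is the main knob here; without it, the clouds would be dominated by words like "recommendation" that appear in almost every title.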

Notice that several papers actively use the word "interaction" in the context of reinforcement learning. This essentially enables users and developers to provide explicit feedback (even more explicit than rating feedback) so that humans can steer a recommender in the right direction.

In other words, bi-directional interaction between humans and systems plays an important role in bringing machine-generated recommendations to a higher level. I personally believe it's time to stop building selfish intelligent systems that naively aggregate and analyze raw data points with no care.
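
To make the idea of feedback-driven interaction more concrete, here is a toy sketch of my own (not taken from any of the papers): an epsilon-greedy bandit loop in which explicit user feedback directly updates the system's value estimates. Item names and the reward scale are purely hypothetical.

```python
import random

# Toy epsilon-greedy loop: explicit user feedback steers future recommendations.
items = ["article-a", "article-b", "article-c"]
value = {i: 0.0 for i in items}   # estimated value per item
count = {i: 0 for i in items}     # how often each item has received feedback
epsilon = 0.1                     # exploration rate

def recommend():
    if random.random() < epsilon:
        return random.choice(items)            # explore
    return max(items, key=lambda i: value[i])  # exploit current estimates

def give_feedback(item, reward):
    """reward: explicit user feedback, e.g. -1 (dislike), 0 (skip), +1 (like)."""
    count[item] += 1
    # Incremental mean update of the item's estimated value.
    value[item] += (reward - value[item]) / count[item]

# Simulated interaction: the user explicitly tells the system what works.
item = recommend()
give_feedback(item, reward=+1)
```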

Last but not least, I'm looking forward to diving deep into the industrial challenges. As always, RecSys strikes a unique balance between theory and practice, drawing from both academia and industry. These papers commonly pose insightful, motivating problem statements that give us an opportunity to rethink how recommenders should be:

Another interesting read is "Amazon at RecSys: Evaluation, bias, and algorithms". As the professor there highlights "evaluation" and "bias" as trends, similarly to what I observed, the field has finally started focusing more on the essential problems every recommender can face.




  Categories

Recommender Systems

  See also

2022-10-14
Ethics in Recommendation Pipeline—A First Look at RecSys 2022 Papers
2021-11-24
How Can Recommender Systems Contribute to Mitigate Echo Chambers and Filter Bubbles?
2017-11-11
Understanding Research Trends in Recommender Systems from Word Cloud


Last updated: 2022-09-02

  Author: Takuya Kitazawa

Takuya Kitazawa is a freelance software developer, previously working at a Big Tech company and a Silicon Valley-based start-up, where he wore multiple hats as a full-stack software developer, machine learning engineer, data scientist, and product manager. At the intersection of the technological and social aspects of data-driven applications, he is passionate about promoting the ethical use of information technologies through his mentoring, business consultation, and public engagement activities. See CV for more information, or contact at [email protected].

  Support by donation   Gift a cup of coffee

  Disclaimer

  • Opinions are my own and do not represent the views of the organizations I belong or have belonged to.
  • I am doing my best to ensure the accuracy and fair use of the information. However, there might be some errors, outdated information, or biased subjective statements because the main purpose of this blog is to jot down my personal thoughts as soon as possible before conducting an extensive investigation. Visitors understand the limitations and rely on any information at their own risk.
  • That said, if there is any issue with the content, please contact me so I can take the necessary action.