Conference season is here, and RecSys is back. I've been watching the evolution of recommender systems over the last few years, including in-person attendance at RecSys 2016 and 2018, and it's great to see the research community returning to a physical conference, unlike in 2020.
After a quick look at the list of accepted papers, one of the biggest trends I see in 2021 is user-centricity: how to let users intervene in the recommendation process while minimizing the risk of bias and maximizing the diversity and fairness of recommendations. In that sense, the list below highlights the papers that attracted me the most:
- An Audit of Misinformation Filter Bubbles on YouTube: Bubble Bursting and Recent Behavior Changes
- The Dual Echo Chamber: Modeling Social Media Polarization for Interventional Recommending
- I Want to Break Free! Recommending Friends from Outside the Echo Chamber
- Towards Unified Metrics for Accuracy and Diversity for Recommender Systems
- “Serving Each User”: Supporting Different Eating Goals Through a Multi-List Recommender Interface
- User Bias in Beyond-Accuracy Measurement of Recommendation Algorithms
Of course, this observation is "biased" by my current personal interest in ethical challenges in recommender systems, but it's certainly an emerging area for the community, as the conference has dedicated sessions on "Echo Chambers and Filter Bubbles", "Users in Focus", and "Privacy, Fairness, Bias".
I started seeing this tendency a couple of years ago, and it's a good indication that researchers are trying to go beyond simple accuracy metrics. It reminds me of a podcast episode in which Joseph Konstan, one of the legendary professors in the field of recommender systems, emphasized how important it is to define the right metrics. That is, quantifying the outcome of a recommendation is a big challenge in this domain.
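To make the point concrete, here is a minimal sketch of what "going beyond accuracy" can look like in code. It contrasts a standard precision@k score with a simple intra-list diversity measure; the item feature vectors and recommendation lists are hypothetical toy data, not taken from any particular paper.

```python
import numpy as np

def precision_at_k(recommended, relevant, k):
    """Standard accuracy metric: fraction of the top-k items the user actually liked."""
    hits = len(set(recommended[:k]) & set(relevant))
    return hits / k

def intra_list_diversity(recommended, item_vectors):
    """Beyond-accuracy metric: average pairwise dissimilarity (1 - cosine) among recommended items."""
    vecs = np.array([item_vectors[i] for i in recommended], dtype=float)
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    sim = vecs @ vecs.T
    n = len(recommended)
    pairwise_sim = (sim.sum() - n) / (n * (n - 1))  # average over distinct pairs only
    return 1.0 - pairwise_sim

# Hypothetical toy data: 4 recommended items, 2 of which are relevant to the user
item_vectors = {0: [1.0, 0.0], 1: [0.9, 0.1], 2: [0.0, 1.0], 3: [0.5, 0.5]}
recommended = [0, 1, 2, 3]
relevant = [0, 2]

print(precision_at_k(recommended, relevant, k=4))       # 0.5
print(intra_list_diversity(recommended, item_vectors))  # higher value = more diverse list
```

A list can score well on the first metric and poorly on the second, which is exactly the tension these papers are wrestling with.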
When I published "Understanding Research Trends in Recommender Systems from Word Cloud" in 2017, this trend wasn't yet clear, and papers mainly discussed the "problem" (e.g., review, ranking, product, online) rather than "end users" or "eventual impact". In 2020 and 2021, although many studies still talk a lot about algorithms and accuracy improvements, terms such as "bias", "metric", and "behavior", which address that lack of foundational consideration and make recommenders more user-centered, have clearly risen, as the word clouds below show.
2020: [word cloud of RecSys 2020 paper titles]
2021: [word cloud of RecSys 2021 paper titles]
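For reference, word clouds like these can be generated with a few lines of Python. The following is a rough sketch, assuming the accepted paper titles are stored one per line in a plain text file (the file name titles.txt is a placeholder) and using the wordcloud and matplotlib packages:

```python
# Rough sketch: build a word cloud from accepted paper titles.
# Assumes one title per line in a hypothetical titles.txt file.
from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt

with open("titles.txt") as f:
    text = f.read().lower()

# Drop generic words so domain-specific terms (e.g., "bias", "metric") stand out
stopwords = STOPWORDS | {"recommendation", "recommendations", "recommender", "system", "systems"}

wc = WordCloud(width=800, height=400, background_color="white",
               stopwords=stopwords).generate(text)

plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.show()
```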
Notice that several papers actively use the word "interaction" in the context of reinforcement learning. Interaction essentially enables users and developers to provide explicit feedback (even more explicit than rating feedback) so that humans can steer a recommender in the right direction.
In other words, bi-directional interaction between humans and systems plays an important role in bringing machine-generated recommendations to a higher level. I personally believe it's time to stop building selfish intelligent systems that naively aggregate and analyze raw data points with no care.
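As a toy illustration of that kind of bi-directional interaction (not any specific paper's method), here is a sketch of an epsilon-greedy loop in which explicit thumbs-up/down feedback directly reshapes future recommendations. The item catalog and the simulated user reaction are hypothetical.

```python
import random

class InteractiveRecommender:
    """Toy epsilon-greedy loop: explicit feedback steers what gets recommended next."""

    def __init__(self, items, epsilon=0.1):
        self.items = items
        self.epsilon = epsilon
        self.score = {item: 0.0 for item in items}  # running average of feedback per item
        self.count = {item: 0 for item in items}

    def recommend(self):
        # Mostly exploit what the user has endorsed so far, occasionally explore
        if random.random() < self.epsilon:
            return random.choice(self.items)
        return max(self.items, key=lambda i: self.score[i])

    def feedback(self, item, reward):
        # reward: +1 for an explicit thumbs-up, -1 for a thumbs-down
        self.count[item] += 1
        self.score[item] += (reward - self.score[item]) / self.count[item]

# Hypothetical usage: the user's explicit reactions reshape subsequent recommendations
rec = InteractiveRecommender(items=["article_a", "article_b", "article_c"])
for _ in range(20):
    item = rec.recommend()
    reward = 1 if item == "article_b" else -1  # pretend the user only likes article_b
    rec.feedback(item, reward)

print(rec.recommend())  # very likely "article_b" after the feedback loop
```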
Last but not least, I'm looking forward to diving deep into the industrial challenges. As always, RecSys strikes a unique balance between theory and practice, drawing from both academia and industry. The industry papers pose insightful, motivating problem statements that give us an opportunity to rethink how recommenders should be built:
- AIR: Personalized Product Recommender System for Nike's Digital Transformation
- Personalizing Peloton: Combining Rankers and Filters To Balance Engagement and Business Goals
- Recommendations and Results Organization in Netflix Search
- Recommender Systems for Personalized User Experience: Lessons learned at Booking.com
- RecSysOps: Best Practices for Operating a Large-Scale Recommender System
Another interesting read would be "Amazon at RecSys: Evaluation, bias, and algorithms". As the professor there similarly highlights "evaluation" and "bias" as key trends, the field has clearly started focusing more on the essential problems every recommender can face.
This article is part of the series: Productizing Data with PeopleShare
See also
- Ethics in Recommendation Pipeline—A First Look at RecSys 2022 Papers (2022-10-14)
- How Can Recommender Systems Contribute to Mitigate Echo Chambers and Filter Bubbles? (2021-11-24)
- Understanding Research Trends in Recommender Systems from Word Cloud (2017-11-11)
Last updated: 2022-09-02
Author: Takuya Kitazawa
Takuya Kitazawa is a freelance software developer based in British Columbia, Canada. As a technologist specializing in AI and data-driven solutions, he has worked globally at Big Tech and start-up companies for a decade. At the intersection of tech and society, he is passionate about promoting the ethical use of information technologies through his mentoring, business consultation, and public engagement activities. See CV for more information, or contact at [email protected].
Disclaimer
- Opinions are my own and do not represent the views of organizations I belong to or have belonged to.
- I am doing my best to ensure the accuracy and fair use of the information. However, there might be some errors, outdated information, or biased subjective statements because the main purpose of this blog is to jot down my personal thoughts as soon as possible before conducting an extensive investigation. Visitors understand the limitations and rely on any information at their own risk.
- That said, if there is any issue with the content, please contact me so I can take the necessary action.