I’ve been keeping up with the classics (NCF, Wide & Deep, LightGCN), but the field seems to have shifted dramatically in the last 18–24 months toward LLM-based reasoning and graph-based retrieval at scale.
I’m looking for the "state of the art" in 2026. Specifically:
LLM4Rec: Beyond just using LLMs for feature engineering, who is doing generative recommendation well? (I've put a rough sketch of what I mean right after this list.)
Retrieval vs. Ranking: Any new breakthroughs in the "Two-Tower" paradigm or vector database integration? (Also sketched below for reference.)
Real-world Scale: Papers that address the latency/cost trade-offs of these newer, heavier models.
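To make the first question concrete, here is roughly what I mean by "generative recommendation": treat item IDs as tokens and autoregressively decode the next item, in the spirit of semantic-ID approaches like TIGER. This is just a toy sketch with placeholder names, vocab sizes, and dimensions, not any particular paper's architecture:

```python
# Toy "generative recommendation" sketch: item IDs as tokens, decode the next item.
# All names, sizes, and the architecture itself are placeholders.
import torch
import torch.nn as nn

NUM_ITEMS, DIM, MAX_LEN = 50_000, 128, 50  # hypothetical catalog / config

class NextItemDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.item_emb = nn.Embedding(NUM_ITEMS, DIM)
        self.pos_emb = nn.Embedding(MAX_LEN, DIM)
        layer = nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, NUM_ITEMS)  # scores over the item vocabulary

    def forward(self, item_seq: torch.Tensor) -> torch.Tensor:
        # item_seq: (batch, seq_len) of item IDs, oldest to newest
        seq_len = item_seq.size(1)
        pos = torch.arange(seq_len, device=item_seq.device)
        h = self.item_emb(item_seq) + self.pos_emb(pos)
        causal = nn.Transformer.generate_square_subsequent_mask(seq_len)
        h = self.decoder(h, mask=causal)  # each position only attends to the past
        return self.head(h)               # next-item logits at every step

model = NextItemDecoder()
histories = torch.randint(0, NUM_ITEMS, (8, 20))   # fake interaction histories
logits = model(histories)
loss = nn.functional.cross_entropy(                # teacher forcing: predict item t+1
    logits[:, :-1].reshape(-1, NUM_ITEMS),
    histories[:, 1:].reshape(-1))
```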
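And on the retrieval side, this is the baseline "two-tower" setup I have in mind when I ask about breakthroughs (again, a minimal placeholder sketch, with the usual in-batch softmax and an ANN/vector index at serving time):

```python
# Minimal two-tower retrieval sketch (PyTorch); all names/dims are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tower(nn.Module):
    """Maps sparse IDs to an L2-normalized embedding."""
    def __init__(self, vocab_size: int, dim: int = 64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.mlp(self.emb(ids)), dim=-1)

user_tower = Tower(vocab_size=100_000)    # hypothetical user-ID vocab
item_tower = Tower(vocab_size=1_000_000)  # hypothetical item-ID vocab

users = torch.randint(0, 100_000, (32,))    # a batch of user IDs
items = torch.randint(0, 1_000_000, (32,))  # the items they interacted with

u, v = user_tower(users), item_tower(items)
# In-batch softmax: each user's positive item vs. the other items in the batch.
logits = u @ v.T / 0.05                          # temperature-scaled similarities
loss = F.cross_entropy(logits, torch.arange(32))
# At serving time, item_tower outputs go into an ANN/vector index,
# and user_tower runs online to produce the query vector.
```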
What has been the most influential paper you’ve read recently that changed how you think about discovery?