Part III.b: Conversational Recommendation with Keyphrases
Keyphrase-Based Semantic Search and LLM Ranking
Introduction
This notebook demonstrates an advanced conversational recommendation pipeline using keyphrases as semantic intermediaries. The system follows a retrieval → ranking architecture: first, we use fast vector similarity search to retrieve candidate items, then we apply an LLM to re-rank and select the best matches.
Note: This is a single-turn query demo that processes one user query and returns immediate recommendations. It does not implement multi-turn conversational interaction where the system asks follow-up questions or refines preferences over multiple exchanges.
Pipeline Overview:
Generate keyphrases for each movie using LLM
Embed all keyphrases and store in vector database
Convert user queries → keyphrases describing desired movies
Retrieve similar keyphrases → aggregate to candidate movies
Re-rank the retrieved candidates with the LLM → final recommendations

Why Keyphrases?

Better Retrieval: More query-keyphrase matches than query-movie matches
Interpretability: Understand why a movie was recommended
Flexibility: Different keyphrases match different user needs
Related Work:
This approach combines ideas from conversational recommendation, multi-vector retrieval, and retrieval-augmented generation:
Retrieval-Augmented Generation (RAG): Lewis et al. (2020) introduced RAG for knowledge-intensive NLP tasks, combining parametric (LLM) and non-parametric (retrieval) memory—our pipeline applies this retrieval→generation pattern to recommendations
Conversational RecSys: Zhou et al. (2020) survey conversational recommender systems that use natural language to understand user preferences
Multi-vector Retrieval: Khattab & Zaharia (2020) introduced ColBERT, using multiple contextualized embeddings per document for late interaction—our keyphrase approach applies similar principles with multiple semantic representations per item
theme_set(
    theme_minimal()
    + theme(
        plot_title=element_text(weight="bold", size=14),
        axis_title=element_text(size=12),
        figure_size=(8, 6),
    )
)

pl.Config(
    fmt_str_lengths=50,
    tbl_rows=20,
)


class ConversationalRecSettings(BaseSettings):
    model_config = SettingsConfigDict(env_prefix="CONV_REC_")

    # Model selection
    llm_model: str = Field(default="ministral-3:3b", description="LLM model for generation")
    embed_model: str = Field(default="nomic-embed-text-v2-moe", description="Embedding model")

    # Recommendations configuration
    num_most_popular: int = Field(default=500, description="Top N most-rated movies to sample from")
    num_sampled: int = Field(
        default=100, description="Number of movies to sample for keyphrase generation"
    )
    num_retrieved: int = Field(
        default=10, description="Number of final movies to retrieve and rank"
    )


settings = ConversationalRecSettings()
Show code
movies, ratings, tags, links = load_movielens("../data")
posters = pl.read_parquet("../data/shared/posters.parquet")

display(
    Markdown(f"""\
**Configuration:**

**Models:**

- Generative: {ollama_model_link(settings.llm_model)}
- Embedding: {ollama_model_link(settings.embed_model)}

**Sampling:**

- Most popular movies considered: {settings.num_most_popular}
- Movies sampled for keyphrases: {settings.num_sampled}
- Final retrieved movies: {settings.num_retrieved}
""")
)
We’ll select a sample of movies for keyphrase generation:
Show code
# Get most-rated movies to ensure we have familiar content
most_rated_movies = (
    ratings.group_by("movie_id")
    .len("num_ratings")
    .top_k(settings.num_most_popular, by="num_ratings")
)
sample_movies = movies.join(most_rated_movies, on="movie_id", how="semi").sample(
    n=settings.num_sampled, seed=42
)

display(Markdown(f"**Selected {len(sample_movies)} movies** for keyphrase generation"))
sample_movies.select("title", "genres").head(10)
Selected 100 movies for keyphrase generation
shape: (10, 2)

| title (str) | genres (list[str]) |
| --- | --- |
| "Wedding Crashers (2005)" | ["Comedy", "Romance"] |
| "Army of Darkness (1993)" | ["Action", "Adventure", … "Horror"] |
| "Mad Max: Fury Road (2015)" | ["Action", "Adventure", … "Thriller"] |
| "Royal Tenenbaums, The (2001)" | ["Comedy", "Drama"] |
| "Shaun of the Dead (2004)" | ["Comedy", "Horror"] |
| "RoboCop (1987)" | ["Action", "Crime", … "Thriller"] |
| "Speed (1994)" | ["Action", "Romance", "Thriller"] |
| "Magnolia (1999)" | ["Drama"] |
| "Dr. Strangelove or: How I Learned to Stop Worrying… | ["Comedy", "War"] |
| "Limitless (2011)" | ["Sci-Fi", "Thriller"] |
Generate Keyphrases for Each Movie
We’ll use the LLM to generate descriptive keyphrases for each movie:
Show code
@retry(3, exceptions=(ValueError, json.JSONDecodeError))
def generate_movie_keyphrases(title, genres):
    """Generate keyphrases describing a movie.

    Keyphrases capture: mood, themes, style, audience, plot elements, emotions, etc.

    Returns:
        dict with 'keyphrases' key containing list of strings
    """
    prompt = f"""\
You are a film expert. Generate 10-15 short keyphrases (2-4 words each) that describe this movie.

Title: {title}
Genres: {", ".join(genres) if genres else "Unknown"}

Include keyphrases for:
- Mood/Atmosphere (e.g., "uplifting", "dark comedy", "suspenseful")
- Themes (e.g., "family bonds", "redemption", "coming-of-age")
- Visual Style (e.g., "visually stunning", "gritty realism")
- Target Audience (e.g., "family-friendly", "thriller fans")
- Emotions (e.g., "heartwarming", "tense", "inspiring")
- Plot Elements (e.g., "time travel", "heist", "love triangle")
- Similar Movies/Genres (e.g., "like Inception", "sci-fi thriller")

Output ONLY valid JSON:
{{
  "keyphrases": ["phrase1", "phrase2", ..., "phrase8"]
}}
"""
    result = ollama_generate_json(prompt, model=settings.llm_model, temperature=0.5)
    if "keyphrases" not in result or not isinstance(result["keyphrases"], list):
        raise ValueError("Invalid keyphrase format")
    if len(result["keyphrases"]) < 8:
        raise ValueError(f"Not enough keyphrases: {len(result['keyphrases'])}")
    return result


# Test with one movie
test_movie = sample_movies.to_dicts()[0]
test_result = generate_movie_keyphrases(test_movie["title"], test_movie["genres"])
display(Markdown(f"**Test: {test_movie['title']}**"))
show_response(test_result)
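The `@retry` decorator used here comes from the notebook's shared helpers and isn't shown in this export. A minimal sketch of what such a helper might look like (hypothetical implementation; the real one may differ):

```python
import functools
import time


def retry(max_attempts, exceptions=(Exception,), delay=0.0):
    """Retry a function up to max_attempts times on the given exceptions."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except exceptions:
                    if attempt == max_attempts:
                        raise  # out of attempts: propagate the last error
                    time.sleep(delay)  # brief pause before retrying
        return wrapper
    return decorator


# Usage: a flaky function that succeeds on the third call
calls = {"n": 0}

@retry(3, exceptions=(ValueError,))
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ValueError("transient failure")
    return "ok"

print(flaky())  # → ok (after two retried failures)
```

This pattern matters here because small local LLMs frequently emit malformed JSON; retrying with the same prompt usually succeeds within a few attempts.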
# Generate keyphrases for all sampled movies
all_keyphrases = []
print(f"Generating keyphrases for {len(sample_movies)} movies...")

for i, movie in enumerate(sample_movies.to_dicts()):
    print(f"  [{i + 1}/{len(sample_movies)}] {movie['title']}")
    result = generate_movie_keyphrases(movie["title"], movie["genres"])
    all_keyphrases.append(result)

keyphrases_df = pl.DataFrame(all_keyphrases)
print("\n✅ Keyphrase generation complete!")

# Combine with movie data
enriched_movies = pl.concat([sample_movies, keyphrases_df], how="horizontal")

# Display stats
total_keyphrases = sum(len(kp) for kp in enriched_movies["keyphrases"].to_list())
avg_keyphrases = total_keyphrases / len(enriched_movies)

display(
    Markdown(f"""\
**Keyphrase Statistics:**

- Total movies: {len(enriched_movies)}
- Total keyphrases: {total_keyphrases:,}
- Average keyphrases per movie: {avg_keyphrases:.1f}
""")
)
Generating keyphrases for 100 movies...
[1/100] Wedding Crashers (2005)
[2/100] Army of Darkness (1993)
[3/100] Mad Max: Fury Road (2015)
[4/100] Royal Tenenbaums, The (2001)
[5/100] Shaun of the Dead (2004)
[6/100] RoboCop (1987)
[7/100] Speed (1994)
[8/100] Magnolia (1999)
[9/100] Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb (1964)
[10/100] Limitless (2011)
[11/100] Ghostbusters (a.k.a. Ghost Busters) (1984)
[12/100] Pirates of the Caribbean: At World's End (2007)
[13/100] Training Day (2001)
[14/100] Natural Born Killers (1994)
[15/100] Air Force One (1997)
[16/100] Arachnophobia (1990)
[17/100] Vertigo (1958)
[18/100] Dumb & Dumber (Dumb and Dumber) (1994)
[19/100] Fight Club (1999)
[20/100] Back to the Future Part III (1990)
[21/100] Dances with Wolves (1990)
[22/100] There Will Be Blood (2007)
[23/100] O Brother, Where Art Thou? (2000)
[24/100] Star Wars: Episode I - The Phantom Menace (1999)
[25/100] V for Vendetta (2006)
[26/100] True Romance (1993)
[27/100] As Good as It Gets (1997)
[28/100] Looper (2012)
[29/100] Rocky Horror Picture Show, The (1975)
[30/100] Kung Fu Panda (2008)
[31/100] Addams Family Values (1993)
[32/100] Chocolat (2000)
[33/100] Catch Me If You Can (2002)
[34/100] Godfather: Part II, The (1974)
[35/100] Star Wars: Episode VII - The Force Awakens (2015)
[36/100] Lord of the Rings: The Two Towers, The (2002)
[37/100] Charlie and the Chocolate Factory (2005)
[38/100] Dark City (1998)
[39/100] Thomas Crown Affair, The (1999)
[40/100] Toy Story (1995)
[41/100] I Am Legend (2007)
[42/100] Godfather, The (1972)
[43/100] My Cousin Vinny (1992)
[44/100] High Fidelity (2000)
[45/100] Maverick (1994)
[46/100] Rocky (1976)
[47/100] Shrek (2001)
[48/100] Erin Brockovich (2000)
[49/100] 50 First Dates (2004)
[50/100] Goodfellas (1990)
[51/100] Love Actually (2003)
[52/100] 10 Things I Hate About You (1999)
[53/100] Edward Scissorhands (1990)
[54/100] Little Miss Sunshine (2006)
[55/100] Broken Arrow (1996)
[56/100] Willy Wonka & the Chocolate Factory (1971)
[57/100] Matrix Revolutions, The (2003)
[58/100] Platoon (1986)
[59/100] Donnie Darko (2001)
[60/100] Robin Hood: Men in Tights (1993)
[61/100] Monsters, Inc. (2001)
[62/100] Gone Girl (2014)
[63/100] No Country for Old Men (2007)
[64/100] Edge of Tomorrow (2014)
[65/100] Whiplash (2014)
[66/100] Legally Blonde (2001)
[67/100] Requiem for a Dream (2000)
[68/100] Borat: Cultural Learnings of America for Make Benefit Glorious Nation of Kazakhstan (2006)
[69/100] Fantasia (1940)
[70/100] Hot Shots! Part Deux (1993)
[71/100] Silence of the Lambs, The (1991)
[72/100] Guardians of the Galaxy (2014)
[73/100] Watchmen (2009)
[74/100] 2001: A Space Odyssey (1968)
[75/100] Star Wars: Episode V - The Empire Strikes Back (1980)
[76/100] Romancing the Stone (1984)
[77/100] Nightmare Before Christmas, The (1993)
[78/100] Grosse Pointe Blank (1997)
[79/100] The Martian (2015)
[80/100] Chasing Amy (1997)
[81/100] Star Wars: Episode II - Attack of the Clones (2002)
[82/100] Seven Samurai (Shichinin no samurai) (1954)
[83/100] Aliens (1986)
[84/100] Mars Attacks! (1996)
[85/100] Aladdin (1992)
[86/100] Much Ado About Nothing (1993)
[87/100] Face/Off (1997)
[88/100] Armageddon (1998)
[89/100] Star Wars: Episode IV - A New Hope (1977)
[90/100] Ice Age (2002)
[91/100] Iron Man 2 (2010)
[92/100] Top Gun (1986)
[93/100] Heat (1995)
[94/100] Scarface (1983)
[95/100] Into the Wild (2007)
[96/100] Wayne's World (1992)
[97/100] Monty Python and the Holy Grail (1975)
[98/100] Quiz Show (1994)
[99/100] Raising Arizona (1987)
[100/100] Chinatown (1974)
✅ Keyphrase generation complete!
Keyphrase Statistics:
Total movies: 100
Total keyphrases: 1,684
Average keyphrases per movie: 16.8
Show code
# Show examples of generated keyphrases
display(Markdown("### Example Keyphrases for 3 Movies\n"))
for movie_dict in enriched_movies.head(3).to_dicts():
    phrases = movie_dict["keyphrases"][:10]  # Show first 10
    display(Markdown(f"**{movie_dict['title']}**  \n{', '.join(phrases)}, ...\n"))
Now we’ll embed all keyphrases and create a searchable index:
Show code
# Create a flat list of all keyphrases with movie associations
keyphrase_index = []
for movie_dict in enriched_movies.to_dicts():
    movie_id = movie_dict["movie_id"]
    title = movie_dict["title"]
    for phrase in movie_dict["keyphrases"]:
        keyphrase_index.append({"movie_id": movie_id, "title": title, "keyphrase": phrase})

keyphrase_df = pl.DataFrame(keyphrase_index)
display(Markdown(f"**Created keyphrase index with {len(keyphrase_df):,} entries**"))
keyphrase_df.head(20)
Created keyphrase index with 1,684 entries
shape: (20, 3)

| movie_id (i64) | title (str) | keyphrase (str) |
| --- | --- | --- |
| 34162 | "Wedding Crashers (2005)" | "satirical wedding chaos" |
| 34162 | "Wedding Crashers (2005)" | "lighthearted rom-com comedy" |
| 34162 | "Wedding Crashers (2005)" | "fake identities & disguises" |
| 34162 | "Wedding Crashers (2005)" | "love triangles & rival suitors" |
| 34162 | "Wedding Crashers (2005)" | "charming small-town vibes" |
| 34162 | "Wedding Crashers (2005)" | "mockumentary-style humor" |
| 34162 | "Wedding Crashers (2005)" | "fake marriage schemes" |
| 34162 | "Wedding Crashers (2005)" | "wedding crashers’ antics" |
| 34162 | "Wedding Crashers (2005)" | "romantic misadventures" |
| 34162 | "Wedding Crashers (2005)" | "playful social satire" |
| 34162 | "Wedding Crashers (2005)" | "fake celebrity impersonations" |
| 34162 | "Wedding Crashers (2005)" | "cozy yet chaotic atmosphere" |
| 34162 | "Wedding Crashers (2005)" | "romantic comedy tropes" |
| 34162 | "Wedding Crashers (2005)" | "fake engagement & deception" |
| 34162 | "Wedding Crashers (2005)" | "wedding crashers’ redemption arc" |
| 34162 | "Wedding Crashers (2005)" | "rom-com with heart" |
| 34162 | "Wedding Crashers (2005)" | "blended love & humor" |
| 34162 | "Wedding Crashers (2005)" | "fake family dynamics" |
| 34162 | "Wedding Crashers (2005)" | "lighthearted escapism" |
| 1215 | "Army of Darkness (1993)" | "Dark comedy horror" |
Show code
# Embed all keyphrases in batches
print("Embedding keyphrases...")

all_keyphrase_texts = keyphrase_df["keyphrase"].to_list()
batch_size = 50
keyphrase_embeddings = []

for i, batch in enumerate(itertools.batched(all_keyphrase_texts, batch_size)):
    if i % 10 == 0:
        print(f"  Batch {i + 1}/{len(all_keyphrase_texts) // batch_size + 1}")
    batch_embeddings = ollama_embed(list(batch), model=settings.embed_model)
    keyphrase_embeddings.extend(batch_embeddings)

keyphrase_matrix = np.array(keyphrase_embeddings)
print("\n✅ Embedding complete!")
print(f"  Shape: {keyphrase_matrix.shape}")
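Note that `itertools.batched` requires Python 3.12 or newer; on older interpreters an equivalent helper can be written with `islice`:

```python
from itertools import islice


def batched(iterable, n):
    """Yield successive tuples of up to n items (itertools.batched fallback)."""
    it = iter(iterable)
    while batch := tuple(islice(it, n)):
        yield batch


print(list(batched(range(7), 3)))  # → [(0, 1, 2), (3, 4, 5), (6,)]
```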
Convert natural language queries into descriptive keyphrases:
Show code
@retry(3, exceptions=(ValueError, json.JSONDecodeError))
def generate_query_keyphrases(query):
    """Convert user query into keyphrases describing desired movie.

    Args:
        query: Natural language description of what user wants

    Returns:
        dict with 'keyphrases' key containing list of strings
    """
    prompt = f"""\
You are a movie recommendation expert. A user describes what they want to watch.
Generate 3-5 short keyphrases (1-4 words each) that describe the type of movie they're looking for.

User Query: "{query}"

Consider their mood, preferences, constraints, and desires.
Include keyphrases for mood, themes, style, audience, emotions, and plot elements.

Output ONLY valid JSON:
{{
  "keyphrases": ["phrase1", "phrase2", ..., "phrase7"]
}}
"""
    result = ollama_generate_json(prompt, model=settings.llm_model, temperature=0.5)
    if "keyphrases" not in result or not isinstance(result["keyphrases"], list):
        raise ValueError("Invalid keyphrase format")
    if len(result["keyphrases"]) < 2:
        raise ValueError(f"Not enough keyphrases: {len(result['keyphrases'])}")
    return result


# Test with example query
user_query = "I'm tired after a long day. Want something calm, uplifting, maybe a bit nostalgic. No intense action or horror."
# user_query = "Looking for something funny and lighthearted for a date night. Nothing too long or serious."

query_result = generate_query_keyphrases(user_query)
query_phrases = query_result["keyphrases"]

display(
    Markdown(f"""\
**User Query:**

_{user_query}_

**Query keyphrases:** {", ".join(query_phrases)}
""")
)
User Query: I’m tired after a long day. Want something calm, uplifting, maybe a bit nostalgic. No intense action or horror.
Query keyphrases: cozy small-town vibes, emotional uplift, nostalgic coming-of-age, gentle drama with heart, lighthearted family warmth, soft visual storytelling, wholesome redemption arc
Similarity Search & Movie Aggregation
Now embed the query keyphrases to enable similarity comparison:
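The query-embedding cell is collapsed in this export; it presumably mirrors the movie-keyphrase embedding step, e.g. `query_matrix = np.array(ollama_embed(query_phrases, model=settings.embed_model))`, yielding one row per query keyphrase. A self-contained sketch of that shape contract, using a deterministic toy stand-in for the embedder (`toy_embed` is illustrative only, not a real model):

```python
import hashlib


def toy_embed(texts, dim=8):
    """Deterministic stand-in for an embedding model: hash bytes scaled to [0, 1]."""
    vectors = []
    for text in texts:
        digest = hashlib.sha256(text.encode("utf-8")).digest()
        vectors.append([b / 255.0 for b in digest[:dim]])
    return vectors


demo_phrases = ["cozy small-town vibes", "emotional uplift", "nostalgic coming-of-age"]
demo_matrix = toy_embed(demo_phrases)

# One embedding row per query keyphrase, all rows the same width
print(len(demo_matrix), len(demo_matrix[0]))  # → 3 8
```

Whatever the embedder, the resulting matrix must have shape `(num_query_phrases, embedding_dim)` so it can be compared row-wise against `keyphrase_matrix`.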
Compute similarities and find the best matching movie keyphrases for each query keyphrase:
Show code
# Compute similarities between query phrases and all movie keyphrases
# Shape: (num_query_phrases, num_all_keyphrases)
similarities = cosine_similarity(query_matrix, keyphrase_matrix)

print(f"Similarity matrix shape: {similarities.shape}")
print(f"  - {similarities.shape[0]} query keyphrases")
print(f"  - {similarities.shape[1]} movie keyphrases")

top_k_phrases_per_query = 30

# For each query phrase, get top-k similar movie keyphrases
# Track: query phrase, movie phrase, movie_id, similarity
all_matches = []
for query_idx, query_phrase in enumerate(query_phrases):
    # Get top matches for this query phrase
    phrase_similarities = similarities[query_idx]
    top_indices = np.argsort(phrase_similarities)[-top_k_phrases_per_query:][::-1]

    for idx in top_indices:
        # Convert numpy int to Python int for Polars indexing
        idx_int = int(idx)
        movie_id = keyphrase_df["movie_id"][idx_int]
        movie_phrase = keyphrase_df["keyphrase"][idx_int]
        similarity_score = float(phrase_similarities[idx])
        all_matches.append(
            {
                "query_phrase": query_phrase,
                "movie_phrase": movie_phrase,
                "movie_id": movie_id,
                "similarity": similarity_score,
            }
        )

matches_df = pl.DataFrame(all_matches)
print(f"\nTotal matches: {len(matches_df)}")
Similarity matrix shape: (7, 1684)
- 7 query keyphrases
- 1684 movie keyphrases
Total matches: 210
Group matches by movie and find the best matching keyphrases for each. The matching_keyphrases column shows which movie keyphrases matched the user’s query:
Show code
# For each movie, get the best matching keyphrases
movie_best_matches = (
    matches_df.sort(["movie_id", "similarity"], descending=[False, True])
    .group_by("movie_id")
    .agg(
        [
            pl.col("similarity").max().alias("max_similarity"),
            pl.col("movie_phrase").first().alias("top_movie_phrase"),
            pl.col("query_phrase").first().alias("top_query_phrase"),
            # Collect top 3 matching movie keyphrases
            pl.col("movie_phrase").head(3).alias("matching_keyphrases"),
        ]
    )
    .sort("max_similarity", descending=True)
    .head(settings.num_retrieved)
)

# Join with movie titles
candidate_movies = movie_best_matches.join(
    movies.select(["movie_id", "title"]), on="movie_id"
).select(["movie_id", "title", "matching_keyphrases", "max_similarity"])

display(Markdown(f"### Top {len(candidate_movies)} Candidate Movies from Keyphrase Search\n"))
candidate_movies
| movie_id | title | matching_keyphrases | max_similarity |
| --- | --- | --- | --- |
| 34162 | "Wedding Crashers (2005)" | ["charming small-town vibes", "rom-com with heart", "wedding crashers’ redemption arc"] | 0.928356 |
| 53125 | "Pirates of the Caribbean: At World's End (2007)" | ["pirate redemption arc"] | 0.759924 |
| … | … | … | … |
LLM Ranking
Use the LLM to re-rank and filter candidates based on the original user query:
Show code
@retry(3, exceptions=(ValueError, json.JSONDecodeError))
def rank_movies_with_llm(query, candidate_movies_df):
    """Use LLM to rank candidate movies based on user query.

    Args:
        query: Original natural language query
        candidate_movies_df: DataFrame with candidate movies
            (must have 'movie_id' and 'title' columns)

    Returns:
        List of dicts with ranked movies and explanations
    """
    # Prepare movie list for prompt
    movie_list = []
    for movie_dict in candidate_movies_df.to_dicts():
        movie_list.append(f"movie_id: {movie_dict['movie_id']}, title: {movie_dict['title']}")
    movies_text = "\n".join(movie_list)

    prompt = f"""\
You are a movie recommendation expert. A user wants a movie recommendation.

User Query: "{query}"

Candidate Movies:
{movies_text}

Task:
1. Select the top 3 movies that best match the user's query
2. Rank them from best to worst (1 = best)
3. Provide a brief reason for each recommended movie
4. Only include movies that are truly a good match

Output ONLY valid JSON in this format:
{{
  "recommendations": [
    {{
      "rank": 1,
      "movie_id": 123,
      "reason": "Brief explanation of why this matches the query"
    }},
    ...
  ]
}}
"""
    result = ollama_generate_json(prompt, model=settings.llm_model, temperature=0.3)
    if "recommendations" not in result or not isinstance(result["recommendations"], list):
        raise ValueError("Invalid ranking format")
    return result


# Rank the candidates
ranking_result = rank_movies_with_llm(user_query, candidate_movies)
display(Markdown("### LLM-Ranked Recommendations\n"))
show_response(ranking_result)
LLM-Ranked Recommendations
LLM Response:
{
"recommendations": [
{
"rank": 1,
"movie_id": 6942,
"reason": "Uplifting and nostalgic with a lighthearted, romantic tone, perfect for unwinding after a long day. The film\u2019s warm, heartfelt storytelling and gentle humor align well with the user\u2019s preference for calmness and nostalgia."
},
{
"rank": 2,
"movie_id": 4014,
"reason": "A visually charming and emotionally uplifting film with a nostalgic touch, focusing on love, family, and small joys. Its gentle pace and sweet romance make it ideal for relaxation."
},
{
"rank": 3,
"movie_id": 1784,
"reason": "While not entirely nostalgic, *As Good as It Gets* delivers warmth, humor, and emotional uplift through its quirky protagonist and heartwarming moments. Its relaxed pace and uplifting themes fit the user\u2019s preference for calmness."
}
]
}
Extract the ranked movie IDs and prepare the final recommendations:
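The extraction cell itself is collapsed in this export; in the notebook it would presumably parse `ranking_result["recommendations"]` and join it back to the movie table with Polars. A plain-Python sketch of the same logic (the `demo_ranking` and `demo_titles` values below are hypothetical samples mirroring the output above):

```python
# Hypothetical LLM output, shaped like ranking_result above
demo_ranking = {
    "recommendations": [
        {"rank": 2, "movie_id": 4014, "reason": "Visually charming and uplifting."},
        {"rank": 1, "movie_id": 6942, "reason": "Uplifting and nostalgic."},
    ]
}

# Toy id-to-title lookup standing in for the movies table
demo_titles = {6942: "Love Actually (2003)", 4014: "Chocolat (2000)"}

# Attach titles, drop any IDs the LLM may have hallucinated, sort by rank
final_recs = sorted(
    (
        {**rec, "title": demo_titles[rec["movie_id"]]}
        for rec in demo_ranking["recommendations"]
        if rec["movie_id"] in demo_titles
    ),
    key=lambda rec: rec["rank"],
)

for rec in final_recs:
    print(f"{rec['rank']}. {rec['title']}: {rec['reason']}")
```

The membership check is worth keeping in any real implementation: small LLMs occasionally return `movie_id` values that were never in the candidate list.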
| rank | movie_id | reason | title |
| --- | --- | --- | --- |
| 1 | 6942 | "Uplifting and nostalgic with a lighthearted, roman… | "Love Actually (2003)" |
| 2 | 4014 | "A visually charming and emotionally uplifting film… | "Chocolat (2000)" |
| 3 | 1784 | "While not entirely nostalgic, *As Good as It Gets*… | "As Good as It Gets (1997)" |
Display with Posters
Finally, show the recommendations with movie posters:
Show code
# Remind us of the query
display(
    Markdown(f"""\
**User Query:**

_{user_query}_
""")
)

# Join with posters (via links to get tmdb_id)
recs_with_posters = final_recommendations.join(
    links.select(["movie_id", "tmdb_id"]), on="movie_id"
).join(posters, on="tmdb_id", how="inner", maintain_order="left")

# Display each recommendation with poster
display(Markdown("## 🎬 Your Personalized Recommendations\n"))
for rec in recs_with_posters.to_dicts():
    display(Markdown(f"### {rec['rank']}. {rec['title']}\n"))
    display(Markdown(f"_{rec['reason']}_\n"))
    display(tmdb_images([rec["poster_path"]]))
    display(Markdown("---\n"))
User Query: I’m tired after a long day. Want something calm, uplifting, maybe a bit nostalgic. No intense action or horror.
🎬 Your Personalized Recommendations
3. As Good as It Gets (1997)
While not entirely nostalgic, As Good as It Gets delivers warmth, humor, and emotional uplift through its quirky protagonist and heartwarming moments. Its relaxed pace and uplifting themes fit the user’s preference for calmness.
2. Chocolat (2000)
A visually charming and emotionally uplifting film with a nostalgic touch, focusing on love, family, and small joys. Its gentle pace and sweet romance make it ideal for relaxation.
1. Love Actually (2003)
Uplifting and nostalgic with a lighthearted, romantic tone, perfect for unwinding after a long day. The film’s warm, heartfelt storytelling and gentle humor align well with the user’s preference for calmness and nostalgia.
Try these example queries:
“I want a mind-bending thriller that makes me think. Something with plot twists and mystery.”
“Something uplifting and inspiring for a Sunday afternoon. Family-friendly.”
“Dark, gritty crime drama with complex characters.”
“Romantic comedy that’s actually funny, not too cheesy.”
Key Takeaways
Why Multi-Vector Keyphrases Work:
Richer matching: many keyphrase embeddings per movie (≈17 on average in this run) vs. 1 single embedding → better retrieval recall
Explainability: Show which movie aspects matched the user’s query
Semantic bridge: LLM-generated keyphrases translate user intent into searchable item features
Inspired by ColBERT (Khattab & Zaharia, 2020): Multiple embeddings per item with late interaction, but using interpretable semantic phrases
Efficient Retrieval → Ranking Architecture:
Fast retrieval: Vector similarity search finds 10 candidates from keyphrase matches
LLM ranking: Re-rank only the candidates (not entire catalog) for final top 3
Limitations & Extensions:
This is a single-turn demo, not multi-turn conversation
Could add: keyphrase weighting, hybrid scoring with collaborative filtering, personalization from user history, temporal/trending keyphrases
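The multi-vector scoring idea behind these takeaways can be illustrated with a ColBERT-style MaxSim: each query keyphrase picks its best-matching movie keyphrase, and the per-phrase maxima are summed per movie. A toy sketch (the 2-d vectors are illustrative; this notebook instead aggregates with a per-movie max over top-k matches):

```python
import math


def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)


def maxsim_score(query_vecs, movie_vecs):
    """ColBERT-style late interaction: for each query keyphrase, take its
    best cosine match among the movie's keyphrases, then sum over queries."""
    return sum(max(cosine(q, m) for m in movie_vecs) for q in query_vecs)


# Toy 2-d "embeddings": two query phrases, two candidate movies
query_vecs = [[1.0, 0.0], [0.6, 0.8]]
movie_a = [[0.9, 0.1], [0.0, 1.0]]    # covers both query directions well
movie_b = [[-1.0, 0.0], [0.0, -1.0]]  # points away from the query

score_a = maxsim_score(query_vecs, movie_a)
score_b = maxsim_score(query_vecs, movie_b)
print(score_a > score_b)  # → True
```

Because each query phrase matches independently, a movie only needs one good keyphrase per facet of the request, which is exactly why per-item multi-vector indexes retrieve better than a single pooled embedding.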
Khattab, O., & Zaharia, M. (2020). ColBERT: Efficient and effective passage search via contextualized late interaction over BERT. Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’20, 39–48. https://doi.org/10.1145/3397271.3401075
Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W., Rocktäschel, T., Riedel, S., & Kiela, D. (2020). Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33, 9459–9474. https://arxiv.org/abs/2005.11401
Zhou, K., Zhou, Y., Zhao, W. X., Wang, X., & Wen, J.-R. (2020). A survey on conversational recommender systems. ACM Computing Surveys, 54(4), 1–36. https://doi.org/10.1145/3453154