- Name 1 – email@example.com
- Name 2 – email@example.com
- Name 3 – email@example.com
- TA Name 1 (Main Supervisor)
- TA Name 2 (Co-supervisor)
Provide a concise summary of your project, including the type of recommender system you are building, the key techniques used, and a brief two-sentence summary of your results.
Summarize your key reproducibility findings in bullet points.
Summarize your key findings about the extensions you implemented in bullet points.
Define the recommendation task you are solving (e.g., sequential, generative, content-based, collaborative, ranking, etc.). Clearly describe inputs and outputs.
Provide the following for each dataset, including any attributes used for measurements such as item fairness:
- Dataset Name
- Pre-processing: e.g., Removed items with fewer than 5 interactions, and users with fewer than 5 interactions
- Subsets considered: e.g., Cold Start (5-10 items)
- Dataset size: # users, # items, sparsity
- Attributes for user fairness (only include if used):
- Attributes for item fairness (only include if used):
- Attributes for group fairness (only include if used):
- Other attributes (only include if used):
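As an illustration of the pre-processing step above, a minimal sketch of 5-core filtering (dropping users and items with fewer than 5 interactions) and sparsity computation; the interaction format and threshold here are hypothetical, not prescribed by the template:

```python
from collections import Counter

def five_core_filter(interactions, min_count=5):
    """Iteratively drop users and items with fewer than min_count interactions.

    interactions: list of (user, item) pairs.
    """
    while True:
        user_counts = Counter(u for u, _ in interactions)
        item_counts = Counter(i for _, i in interactions)
        kept = [(u, i) for u, i in interactions
                if user_counts[u] >= min_count and item_counts[i] >= min_count]
        if len(kept) == len(interactions):
            return kept  # fixed point reached: nothing left to drop
        interactions = kept

def sparsity(interactions):
    """Fraction of the user-item matrix with no observed interaction."""
    users = {u for u, _ in interactions}
    items = {i for _, i in interactions}
    return 1.0 - len(set(interactions)) / (len(users) * len(items))
```

Note that filtering must be applied iteratively, since removing a sparse item can push a user below the threshold (and vice versa).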
Briefly explain why these metrics are appropriate for your recommendation task and what they measure.
- Metric #1
- Description:
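For instance, if one of your metrics is a ranking metric such as NDCG@k, a minimal sketch of its computation with binary relevance might look like the following (the function and inputs are illustrative, not part of the template):

```python
import math

def ndcg_at_k(ranked_items, relevant_items, k=10):
    """NDCG@k with binary relevance: DCG of the ranking over the ideal DCG."""
    relevant = set(relevant_items)
    # Discounted gain: a hit at rank r (0-indexed) contributes 1 / log2(r + 2)
    dcg = sum(1.0 / math.log2(rank + 2)
              for rank, item in enumerate(ranked_items[:k]) if item in relevant)
    # Ideal DCG: all relevant items placed at the top of the list
    ideal = sum(1.0 / math.log2(rank + 2)
                for rank in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0
```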
Describe each baseline and each primary method, and explain how they are implemented. Mention the tools/frameworks used (e.g., Surprise, LightFM, RecBole, PyTorch).
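As a concrete example of what counts as a baseline, a non-personalized popularity ranker is a common reference point; a minimal sketch (the data format and helper name here are hypothetical):

```python
from collections import Counter

def popularity_baseline(train_interactions, seen_by_user, user, k=10):
    """Rank items by global interaction count, excluding items the user already saw."""
    counts = Counter(item for _, item in train_interactions)
    seen = seen_by_user.get(user, set())
    candidates = [item for item, _ in counts.most_common() if item not in seen]
    return candidates[:k]
```

Even when your primary method comes from a framework such as RecBole or Surprise, reporting a simple baseline like this makes the gains of the learned model easier to interpret.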
Explain your approach in simple terms. Describe your model pipeline: data input → embedding/representation → prediction → ranking. Discuss design choices, such as use of embeddings, neural networks, or attention mechanisms.
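The pipeline above (input → representation → prediction → ranking) can be sketched end to end with dot-product scoring over embeddings; the embeddings below are random stand-ins for learned ones, so this illustrates the data flow only, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 4, 6, 8

# Representation: one embedding vector per user and per item
user_emb = rng.normal(size=(n_users, dim))
item_emb = rng.normal(size=(n_items, dim))

def recommend(user_id, k=3):
    """Prediction: dot-product score per item; Ranking: top-k items by score."""
    scores = item_emb @ user_emb[user_id]
    return np.argsort(scores)[::-1][:k].tolist()
```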
List and briefly describe the extensions you made to the original method, including any evaluation extensions, e.g., additional metrics or new datasets considered.