Conversation

@Aksinya-Bykova
Collaborator


Aligned SASRec in-batch training with Google's 'Sampling-Bias-Corrected Neural Modeling' (2019) and the reference 'Golden Code' implementation.

Key changes:
1. Mathematical Correction: Updated SamplesSoftmaxLoss to subtract logQ from BOTH positive and negative scores (Eq. 3); see the sketch after this list.
2. False Negative Masking: Added masking logic to neutralize logits where positive items accidentally appear in the negative pool.
3. Architecture: Modified SasRecInBatchModel to return item IDs and updated loss factory to support global frequency loading via 'path_to_item_counts'.
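
For reference, a minimal sketch of the corrected loss core described in points 1 and 2, assuming PyTorch and illustrative tensor names (`pos_logits`, `neg_logits`, `log_q`, `fn_mask` are not the actual attributes of SamplesSoftmaxLoss):

```python
import torch
import torch.nn.functional as F

def logq_corrected_loss(pos_logits, neg_logits, pos_log_q, neg_log_q, fn_mask):
    """Sketch of Eq. 3 from "Sampling-Bias-Corrected Neural Modeling" (2019):
    subtract log sampling probabilities (logQ) from BOTH positive and negative
    scores, then neutralize false negatives before the softmax cross-entropy.

    pos_logits: (B,)      score of each row's positive item
    neg_logits: (B, N)    scores of the in-batch negatives
    pos_log_q:  (B,)      log sampling probability of the positives
    neg_log_q:  (B, N)    log sampling probability of the negatives
    fn_mask:    (B, N)    True where a negative equals the row's positive
    """
    pos = pos_logits - pos_log_q                        # logQ correction (positives)
    neg = neg_logits - neg_log_q                        # logQ correction (negatives)
    neg = neg.masked_fill(fn_mask, float("-inf"))       # false negatives contribute nothing
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1)  # positive sits at class index 0
    targets = torch.zeros(pos.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, targets)
```
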
Refined the core logic of SamplesSoftmaxLoss.forward to align with
theoretical requirements and the "golden" SASRec implementation.

Changes:
1. False Negative Masking: Added ID-based logic to identify and neutralize
   logits where a positive item accidentally appears as a negative sample
   within the same batch. This prevents contradictory gradient signals
   (see the sketch after this list).
2. Unbiased LogQ: Implemented Eq. 3 from the Google paper, ensuring that
   log-frequencies are subtracted from BOTH positive and negative scores
   to maintain an unbiased estimator.
3. Documentation: Added detailed English comments referencing the
   "Sampling-Bias-Corrected Neural Modeling" paper (2019).
"Sampling-Bias-Corrected Neural Modeling" (2019) and reference implementations.

Key changes:
1. Mathematical Correctness: Updated forward pass to apply LogQ correction
   to BOTH positive and negative scores (unbiased estimator per Google Eq. 3).
2. Training Stability: Implemented False Negative Masking using item IDs
   to neutralize target items that accidentally appear in the negative pool.
3. Data Integration: Added 'path_to_item_counts' support to create_from_config.
   The loss now pre-loads and pre-computes log-frequencies from a pickle file
   (see the sketch after this list).
4. Robustness: Added device management for log_counts and fallback logic
   for dynamic LogQ inputs.
5. Infrastructure: Integrated logging to track frequency table initialization.
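
Points 3 and 4 could be implemented roughly as below; the pickle layout (a dict mapping item ID to count), the smoothing constant, and the helper name are assumptions, not the repository's actual create_from_config code:

```python
import pickle
import torch

def load_log_frequencies(path_to_item_counts: str, num_items: int, smoothing: float = 1.0) -> torch.Tensor:
    """Pre-compute log sampling probabilities (logQ) from global item counts.

    Assumes the pickle file stores {item_id: count}; items missing from the
    file fall back to `smoothing` so the logarithm stays finite.
    """
    with open(path_to_item_counts, "rb") as f:
        counts = pickle.load(f)
    freq = torch.full((num_items,), smoothing, dtype=torch.float)
    for item_id, count in counts.items():
        freq[item_id] = count + smoothing
    return torch.log(freq / freq.sum())  # log of the empirical sampling probability

# Keeping the result as a registered buffer lets it move with the loss module
# across devices; if item IDs arrive on another device at runtime, indexing via
# log_q[item_ids.to(log_q.device)] is a safe fallback.
```
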
Created a specialized configuration for SASRec on the Clothing dataset
to test the implemented LogQ correction and FN masking (a sketch of the
key settings follows the list below).
- Model set to 'sasrec_in_batch' to support ID pass-through.
- Loss switched to 'sampled_softmax' with 'use_logq_correction' enabled.
- Linked 'path_to_item_counts' to popularity statistics.
- Configured evaluation metrics for consistency with MCLSR benchmarks.
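
A sketch of the relevant settings, written as a Python dict because the repository's actual config file format and surrounding keys are not shown here (the pickle path is hypothetical):

```python
# Illustrative only: key names beyond those mentioned above may differ.
config = {
    "model": "sasrec_in_batch",  # variant that passes item IDs through to the loss
    "loss": {
        "type": "sampled_softmax",
        "use_logq_correction": True,
        "path_to_item_counts": "data/clothing_item_counts.pkl",  # hypothetical path to popularity statistics
    },
    # Evaluation metrics are configured to match the MCLSR benchmarks
    # (exact metric names omitted here).
}
```
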
Modified MCLSRModel.forward return dictionary to include raw indices:
- Added 'positive_ids' and 'negative_ids' to support unbiased LogQ correction.
- Added 'user_ids' to enable potential user-level frequency correction.
This change allows SamplesSoftmaxLoss to correctly map embeddings back
to their popularity statistics and perform False Negative Masking,
bringing the MCLSR training pipeline in line with the SASRec improvements.
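
A self-contained sketch of the extended forward output; the encoder attributes and embedding dimensions are illustrative, and only the three added ID keys mirror the change described above:

```python
import torch.nn as nn

class MCLSRModelSketch(nn.Module):
    """Illustrative stand-in for MCLSRModel showing the extended return dict."""

    def __init__(self, num_users: int, num_items: int, dim: int = 64):
        super().__init__()
        self.user_encoder = nn.Embedding(num_users, dim)
        self.item_encoder = nn.Embedding(num_items, dim)

    def forward(self, user_ids, positive_ids, negative_ids):
        return {
            "user_embeddings": self.user_encoder(user_ids),
            "positive_embeddings": self.item_encoder(positive_ids),
            "negative_embeddings": self.item_encoder(negative_ids),
            # Raw indices passed through so SamplesSoftmaxLoss can look up
            # popularity statistics (logQ) and build the false-negative mask.
            "positive_ids": positive_ids,
            "negative_ids": negative_ids,
            "user_ids": user_ids,  # enables potential user-level frequency correction
        }
```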