
Conversation


@Sneakr commented Dec 18, 2025

Adds (If True, set alpha ~ sqrt(r) for correct scaling) to the rslora comments for clarity.

@gemini-code-assist
Contributor

Summary of Changes

Hello @Sneakr, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on enhancing the clarity and usability of the codebase by updating documentation comments. Specifically, it provides explicit instructions for setting the alpha parameter when rslora is activated, ensuring users correctly configure the scaling for rank-stabilized LoRA. This change improves the developer experience by offering immediate, context-sensitive guidance.

Highlights

  • Improved Documentation for rslora: The comments for the use_rslora parameter have been updated across numerous notebooks and Python scripts to include crucial guidance: (If True, set alpha ~ sqrt(r) for correct scaling). This clarifies the recommended scaling for rank-stabilized LoRA when enabled.
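
For reference, here is a minimal sketch of what that guidance means in practice. It is illustrative only: it uses Hugging Face PEFT's LoraConfig rather than the unsloth get_peft_model call shown in the notebooks, and the rank r = 16 is an arbitrary choice; the point is simply pairing use_rslora = True with an alpha on the order of sqrt(r), as the updated comment advises.

import math
from peft import LoraConfig

r = 16  # LoRA rank; an arbitrary example value
config = LoraConfig(
    r = r,
    # Rank-stabilized LoRA scales the adapter update by alpha / sqrt(r) instead of
    # alpha / r, so the updated comment suggests choosing alpha on the order of sqrt(r).
    lora_alpha = round(math.sqrt(r)),  # sqrt(16) = 4
    use_rslora = True,
)

The same pairing should carry over to the corresponding lora_alpha and use_rslora arguments in the notebooks' configuration cells.
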
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the root of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its suggestions are incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in sharing feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check its output and use code with caution.


gemini-code-assist bot left a comment


Code Review

This pull request enhances the clarity of the rslora configuration comment across a large number of Jupyter notebooks and Python scripts. The change adds a helpful tip about setting alpha when use_rslora is enabled and standardizes the term to 'rank-stabilized LoRA'. While the change is beneficial, the resulting comment makes many lines exceed the recommended length, impacting readability. I've added suggestions to shorten the comment for better adherence to style guidelines.

" use_gradient_checkpointing = \"unsloth\", # True or \"unsloth\" for very long context\n",
" random_state = 3407,\n",
" use_rslora = False, # We support rank stabilized LoRA\n",
" use_rslora = False, # We support rank-stabilized LoRA (If True, set alpha ~ sqrt(r) for correct scaling)\n",

Severity: medium

The code within this notebook cell string contains a very long line, which can harm readability when viewed as code. To improve this, I suggest shortening the comment. This suggestion applies to all other notebooks modified in this PR.

Suggested change
" use_rslora = False, # We support rank-stabilized LoRA (If True, set alpha ~ sqrt(r) for correct scaling)\n",
" use_rslora = False, # RsLoRA: For correct scaling, set alpha ~ sqrt(r) if True.\n",

use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context
random_state = 3407,
-use_rslora = False, # We support rank stabilized LoRA
+use_rslora = False, # We support rank-stabilized LoRA (If True, set alpha ~ sqrt(r) for correct scaling)

Severity: medium

This line is quite long (over 110 characters), which exceeds common line-length limits such as 79 characters in PEP 8 or 88 in black's default. Long lines can be harder to read. To improve readability, I suggest shortening the comment. This suggestion is applicable to all other Python scripts modified in this PR.

Suggested change
use_rslora = False, # We support rank-stabilized LoRA (If True, set alpha ~ sqrt(r) for correct scaling)
use_rslora = False, # RsLoRA: For correct scaling, set alpha ~ sqrt(r) if True.


Sneakr commented Dec 18, 2025

/gemini review


gemini-code-assist bot left a comment


Code Review

This pull request improves comment clarity for rslora and loftq_config parameters across a large number of notebooks and Python scripts. The new comment for rslora is a significant improvement, providing useful guidance on its usage. The changes are consistent and improve code readability and maintainability. However, a critical issue has been identified: the loftq_update_log.txt file, which appears to be a local script execution log, has been added. This file contains a local file path, posing a security concern, and should be removed. For better clarity, I also suggest updating the pull request title and description to reflect all the changes made, including the updates to loftq_config comments.

Comment on lines 1 to 3
Comment updated to: We support LoftQ
Run at: 2025-12-18T02:47:41
Root: C:\Users\HZ\Documents\Dev\unsloth\notebooks

Severity: critical (security)

This log file appears to be a generated artifact from a local script and should not be committed to the repository.

It contains sensitive information, such as the local file path on your machine on line 3: C:\Users\HZ\Documents\Dev\unsloth\notebooks. Exposing such information is a security risk.

Please remove this file from the pull request. To prevent this from happening again, consider adding loftq_update_log.txt or a more general pattern like *.txt or *.log to your .gitignore file.
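
For contributors who want the concrete steps, here is one way that cleanup might look from the command line. It is a sketch only: it assumes the log sits at the repository root, and whether to ignore just this file or a broader pattern is a judgment call for the maintainers.

git rm --cached loftq_update_log.txt        # remove the file from the index but keep the local copy
echo "loftq_update_log.txt" >> .gitignore   # or a broader pattern such as *.log
git add .gitignore
git commit -m "Remove generated loftq_update_log.txt and ignore it going forward"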


Sneakr commented Dec 18, 2025

/gemini review


gemini-code-assist bot left a comment


Code Review

This pull request systematically improves the clarity of configuration options across a large number of notebooks and Python scripts. The primary change enhances the comment for the use_rslora parameter, providing crucial information about scaling alpha. Additionally, the changes include stylistic improvements such as standardizing comments for loftq_config and correcting spacing and alignment, which significantly boosts code readability and maintainability. The updates are consistent and well-executed. Great work!
