Conversation
Summary of Changes

Hello @anrp, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request enhances the model saving capabilities by introducing a configurable …
Code Review
This pull request introduces a 'save all' feature and refactors the model saving logic into a dedicated save_model function. The changes are generally good, centralizing the saving mechanism and adding useful functionality.
However, there are a few critical issues with how the new save_model function is called, which will lead to runtime errors due to incorrect arguments. I've pointed these out with suggestions for fixes. Additionally, there are a couple of minor issues related to code consistency and adherence to the repository's style guide regarding type annotations. Addressing these points will improve the robustness and maintainability of the code.
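A minimal sketch of what a dedicated save function of this kind could look like. The names, parameters, and return values here are illustrative assumptions, not the PR's actual signature:

```python
# Hypothetical sketch of the refactor described above: saving logic that
# previously lived inline inside the interactive loop becomes a standalone
# function whose inputs are passed explicitly. Names are illustrative.
from dataclasses import dataclass


@dataclass
class SaveRequest:
    path: str
    merge: bool  # True: save a fully merged model; False: save a LoRA adapter


def save_model(model, request: SaveRequest) -> str:
    """Save `model` per `request`; return a description of what was written."""
    if request.merge:
        # In the real code this would do something like
        # model.merge_and_unload().save_pretrained(request.path)
        return f"merged model -> {request.path}"
    # Adapter-only save keeps the base weights untouched.
    return f"LoRA adapter -> {request.path}"
```

Because every input is now an explicit argument, each call site must pass the right ones, which is exactly the kind of argument mismatch the review above flags.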
Force-pushed 05a29ea to 661388e.
What problem is this PR intended to solve? I don't think asking the user a single question is so much of a hassle that there needs to be a pre-made setting to skip it. While I can see that in very specific situations the user might indeed want to save several (though not necessarily all) trials from the Pareto front, this is an absolutely massive footgun that can write terabytes to disk. The risk/benefit calculus doesn't work out here IMO. The Pareto front is usually on the order of 10 entries, and in those rare cases where saving all of them is desired, the user can do so manually in a matter of minutes.
I don't mind deleting the "save-all" capability here, but I would like to preserve the LoRA-for-anything part, because it can be used with vLLM to serve multiple heretic results with minimal overhead compared to merged models. I also wrote it prior to #106: before having something like that, all the data is lost once you exit, but if we have that PR then there's not much value in it.
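To illustrate the vLLM point: vLLM can serve one base model with several LoRA adapters attached, so multiple heretic results can share the base weights instead of each requiring a full merged copy. The model name and adapter paths below are placeholders:

```shell
# Serve one base model with several LoRA adapters (paths are illustrative).
# Each adapter is selected per-request by its registered name, rather than
# loading a separate fully merged model per result.
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --enable-lora \
  --lora-modules heretic-a=/path/to/adapter_a heretic-b=/path/to/adapter_b
```

This is a serving configuration sketch, not something from the PR itself.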
The option to save as LoRA makes sense, but I still don't like that being a setting, because I think the user should always make that choice in-context. |
Force-pushed 661388e to 5d57ea0.
Removed the save-all machinery. PTAL?
return merge_choice
…
def save_model(
Why is this a separate function? Seems a bit ad-hoc to rip all the context out into parameters. Where can this be reused?
The trial parameter appears to be unused.
Mostly it's just the level of indentation; it's quite deep. Removed trial.
Force-pushed 5d57ea0 to f565b9b.
Merged, thanks!