Fix LoRA merge for sharded Gemma 3 checkpoints and HF parameter mapping #1336

Open

ayehninnkhine wants to merge 2 commits into google:main from ayehninnkhine:main

Conversation

@ayehninnkhine

Summary

This PR fixes LoRA merge failures with Hugging Face Gemma 3 checkpoints.

Changes

  • Added support for sharded safetensors checkpoints (model.safetensors.index.json)
  • Introduced a mapping from Tunix parameter names → HF parameter names
  • Replaced strict assertions with a safe fallback that skips unmatched layers (see the sketch after this list)
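
A minimal sketch of what the name mapping and safe fallback might look like. The rewrite rules, `map_param_name`, and `merge_lora_deltas` below are illustrative assumptions, not this PR's actual identifiers:

```python
# Hypothetical sketch of the Tunix -> HF name mapping with a safe
# fallback; rule table and function names are illustrative only.
import re

# Example rewrite rules (regex pattern -> replacement); the real table
# for Gemma 3 is larger and may use different Tunix-side names.
TUNIX_TO_HF = [
    (r"^layer_(\d+)/attn/q_proj$", r"model.layers.\1.self_attn.q_proj.weight"),
    (r"^layer_(\d+)/mlp/gate_proj$", r"model.layers.\1.mlp.gate_proj.weight"),
]

def map_param_name(tunix_name):
    """Return the matching HF name, or None if no rule applies."""
    for pattern, repl in TUNIX_TO_HF:
        if re.match(pattern, tunix_name):
            return re.sub(pattern, repl, tunix_name)
    return None

def merge_lora_deltas(base_state, lora_deltas):
    """Add LoRA deltas into the base state, skipping unmatched layers
    instead of raising an assertion error."""
    for tunix_name, delta in lora_deltas.items():
        hf_name = map_param_name(tunix_name)
        if hf_name is None or hf_name not in base_state:
            print(f"skipping unmatched parameter: {tunix_name}")
            continue
        base_state[hf_name] = base_state[hf_name] + delta
    return base_state
```

Skipping with a warning rather than asserting lets the merge complete even when a checkpoint contains parameters the mapping table doesn't cover.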

The current implementation assumes:

  • single-file checkpoints
  • parameter names that match between Tunix and HF

These assumptions cause errors with modern HF checkpoints (e.g., Gemma 3 4B), which are typically sharded; this PR restores compatibility.

Added a load_base_state function that handles loading model state from single-file or sharded safetensors checkpoints. Updated save_lora_merged_model_as_safetensors to use the new load_base_state function.
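
A rough sketch of how such a loader might detect a sharded checkpoint via the index file, assuming the standard HF layout; the exact signature of the PR's load_base_state may differ:

```python
# Hypothetical sketch of a loader for single-file or sharded safetensors
# checkpoints; the PR's actual load_base_state may differ.
import json
import os
from safetensors import safe_open

def load_base_state(checkpoint_dir):
    """Load all tensors from model.safetensors, or from the shards
    listed in model.safetensors.index.json if that file exists."""
    index_path = os.path.join(checkpoint_dir, "model.safetensors.index.json")
    state = {}
    if os.path.exists(index_path):
        # Sharded checkpoint: the index maps each tensor name to its shard file.
        with open(index_path) as f:
            weight_map = json.load(f)["weight_map"]
        for shard in sorted(set(weight_map.values())):
            with safe_open(os.path.join(checkpoint_dir, shard), framework="np") as f:
                for name in f.keys():
                    state[name] = f.get_tensor(name)
    else:
        # Single-file checkpoint.
        single_path = os.path.join(checkpoint_dir, "model.safetensors")
        with safe_open(single_path, framework="np") as f:
            for name in f.keys():
                state[name] = f.get_tensor(name)
    return state
```
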
@google-cla

google-cla bot commented Apr 1, 2026

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

