
Fix count_used_parameters_in_backward crash on PyTorch < 2.3 (#7756) #7849

Draft
harshang03 wants to merge 1 commit into deepspeedai:master from harshang03:fix/7756-count-used-params-fallback

Conversation

@harshang03

The function asserted the presence of internal PyTorch APIs (_get_grad_fn_or_grad_acc, _current_graph_task_id, _will_engine_execute_node) that only exist in PyTorch >= 2.3. On older builds (e.g. 2.1.2), the assert fired unconditionally inside gradient hooks, crashing training with ZeRO stage 1/2/3.
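To make the failure mode concrete, here is a hypothetical minimal reproduction; the hook wiring and the probed attribute path are assumptions for illustration, not the actual DeepSpeed code. An exception raised inside a gradient hook propagates out of `backward()`, which is why the hard assert aborted training:

```python
import torch

def check_engine_introspection():
    # Stand-in for the old hard assert in count_used_parameters_in_backward;
    # the probed attribute path here is an assumption for illustration.
    assert hasattr(torch.autograd.graph, "_get_grad_fn_or_grad_acc"), \
        "internal autograd introspection requires PyTorch >= 2.3"

p = torch.nn.Parameter(torch.randn(4))
p.register_post_accumulate_grad_hook(lambda param: check_engine_introspection())
(p * 2.0).sum().backward()  # AssertionError surfaces here on PyTorch 2.1.2
```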

Changes:

  • runtime/utils.py: Replace the hard assert with a graceful fallback that counts all grad-requiring parameters (a conservative upper bound) when the internal APIs are unavailable; see the sketch after this list.
  • runtime/engine.py: Enable _support_torch_style_backward for all ZeRO optimizers regardless of PyTorch version, since the fallback counting is safe and correct. Remove unused import.
  • base_optimizer.py: No changes needed (already handles missing APIs in queue_post_backward_callback).
  • tests/: Add comprehensive test suite covering fallback behaviour, native path, edge cases, and API availability checks.

Fixes #7756
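
For orientation, here is a minimal, hedged sketch of the fallback pattern the runtime/utils.py bullet describes. The module paths used to probe for the internal APIs, and the function body itself, are assumptions for illustration; the actual implementation in runtime/utils.py may differ in detail:

```python
import torch

# Probe for the internal autograd APIs named above. Their exact module
# locations are assumptions for illustration; per this PR, they are
# absent on PyTorch < 2.3.
_get_grad_fn_or_grad_acc = getattr(torch.autograd.graph,
                                   "_get_grad_fn_or_grad_acc", None)
_will_engine_execute_node = getattr(torch._C, "_will_engine_execute_node", None)
_current_graph_task_id = getattr(torch._C, "_current_graph_task_id", None)

_HAS_NATIVE_APIS = None not in (_get_grad_fn_or_grad_acc,
                                _will_engine_execute_node,
                                _current_graph_task_id)


def count_used_parameters_in_backward(parameters):
    """Count how many parameters the running backward pass will touch."""
    parameters = list(parameters)
    if not _HAS_NATIVE_APIS:
        # Fallback for PyTorch < 2.3: assume every grad-requiring parameter
        # participates. This can only overcount, never undercount.
        return sum(1 for p in parameters if p.requires_grad)

    # Native path: ask the autograd engine whether it will execute each
    # parameter's gradient node in the graph task currently running.
    # (_will_engine_execute_node is only meaningful while a backward pass
    # is in flight, e.g. when called from a gradient hook.)
    count = 0
    for p in parameters:
        if not p.requires_grad:
            continue
        node = _get_grad_fn_or_grad_acc(p)
        if node is not None and _will_engine_execute_node(node):
            count += 1
    return count
```

The key property is that the fallback can only overcount, which is what makes it safe to enable _support_torch_style_backward unconditionally in the engine.py change above.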

Signed-off-by: Harshang Akabari <a.harshang@gmail.com>
Co-authored-by: Cursor <cursoragent@cursor.com>


Development

Successfully merging this pull request may close these issues:

[BUG] count_used_parameters_in_backward does not work with PyTorch 2.1.2
