
Logger is not defined #13104

@teleprint-me

Description


Describe the bug

diffusers.quantizers.torchao.torchao_quantizer references an undefined logger instance inside _update_torch_safe_globals, which surfaces as a failed import of diffusers.quantizers.pipe_quant_config.

Reproduction

Install the torch, torchvision, and torchao nightly builds, then import PipelineQuantizationConfig and TorchAoConfig from diffusers, as in the snippet below.
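
A minimal trigger, assuming the nightly wheels are already installed (this mirrors the import on line 14 of the txt2img.py script in the traceback below):

```python
# The import alone is enough to hit the NameError/RuntimeError shown in the logs.
from diffusers import PipelineQuantizationConfig, TorchAoConfig
```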

Logs

Traceback (most recent call last):
  File ".venv/lib/python3.14/site-packages/diffusers/quantizers/torchao/torchao_quantizer.py", line 90, in _update_torch_safe_globals
    from torchao.dtypes.uintx.uint4_layout import UInt4Tensor
ModuleNotFoundError: No module named 'torchao.dtypes.uintx.uint4_layout'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File ".venv/lib/python3.14/site-packages/diffusers/utils/import_utils.py", line 1016, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
           ~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.14/importlib/__init__.py", line 88, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1398, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1371, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1314, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 491, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1398, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1371, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1342, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 938, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 759, in exec_module
  File "<frozen importlib._bootstrap>", line 491, in _call_with_frames_removed
  File ".venv/lib/python3.14/site-packages/diffusers/quantizers/__init__.py", line 16, in <module>
    from .auto import DiffusersAutoQuantizer
  File ".venv/lib/python3.14/site-packages/diffusers/quantizers/auto.py", line 35, in <module>
    from .torchao import TorchAoHfQuantizer
  File ".venv/lib/python3.14/site-packages/diffusers/quantizers/torchao/__init__.py", line 15, in <module>
    from .torchao_quantizer import TorchAoHfQuantizer
  File ".venv/lib/python3.14/site-packages/diffusers/quantizers/torchao/torchao_quantizer.py", line 111, in <module>
    _update_torch_safe_globals()
    ~~~~~~~~~~~~~~~~~~~~~~~~~~^^
  File ".venv/lib/python3.14/site-packages/diffusers/quantizers/torchao/torchao_quantizer.py", line 96, in _update_torch_safe_globals
    logger.warning(
    ^^^^^^
NameError: name 'logger' is not defined

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "txt2img.py", line 14, in <module>
    from diffusers import PipelineQuantizationConfig, TorchAoConfig, ZImagePipeline
  File "<frozen importlib._bootstrap>", line 1423, in _handle_fromlist
  File ".venv/lib/python3.14/site-packages/diffusers/utils/import_utils.py", line 1006, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File ".venv/lib/python3.14/site-packages/diffusers/utils/import_utils.py", line 1018, in _get_module
    raise RuntimeError(
    ...<2 lines>...
    ) from e
RuntimeError: Failed to import diffusers.quantizers.pipe_quant_config because of the following error (look up to see its traceback):
name 'logger' is not defined

System Info

$ diffusers-cli env


- 🤗 Diffusers version: 0.36.0
- Platform: Linux-6.12.69-1-lts-x86_64-with-glibc2.42
- Running on Google Colab?: No
- Python version: 3.14.2
- PyTorch version (GPU?): 2.11.0.dev20260207+rocm7.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 1.4.1
- Transformers version: 5.1.0
- Accelerate version: 1.12.0
- PEFT version: 0.18.1
- Bitsandbytes version: not installed
- Safetensors version: 0.7.0
- xFormers version: not installed
- Accelerator: NA
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>

Who can help?

I was able to resolve the issue by referencing the library's default logger:

from ...utils.logging import _get_library_root_logger

and then adding the following line after the torch and torchao availability checks:

logger = _get_library_root_logger()

I'm not sure whether this is the right way to fix it, but the issue is resolved after adding these two lines to diffusers/quantizers/torchao/torchao_quantizer.py.
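
Here is a sketch of how the patch might sit at module level. The availability-check names are assumptions based on the usual diffusers import guards; they are not copied from the file.

```python
# diffusers/quantizers/torchao/torchao_quantizer.py (sketch, not the file's actual contents)
from ...utils import is_torch_available, is_torchao_available
from ...utils.logging import _get_library_root_logger

if is_torch_available() and is_torchao_available():
    # Bind a module-level logger so _update_torch_safe_globals() can emit its
    # warning about missing torchao tensor subclasses instead of raising NameError.
    logger = _get_library_root_logger()
```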

This is a nightly release, so I expect issues to appear from time to time. This seems like a simple oversight and is easy to fix.
