feat(pathfinder): add CTK root canary probe for non-standard-path libs #1595
Conversation
Libraries like nvvm, whose shared object lives in a subdirectory (/nvvm/lib64/) that is not on the system linker path, cannot be found via bare dlopen on system CTK installs without CUDA_HOME.

Add a "canary probe" search step: when direct system search fails, system-load a well-known CTK lib that IS on the linker path (cudart), derive the CTK installation root from its resolved path, and look for the target lib relative to that root via the existing anchor-point logic. The mechanism is generic: any future lib with a non-standard path just needs its entry in _find_lib_dir_using_anchor_point.

The canary probe is intentionally placed after CUDA_HOME in the search cascade to preserve backward compatibility: users who have CUDA_HOME set expect it to be authoritative, and existing code relying on that ordering should not silently change behavior.

Co-authored-by: Cursor <cursoragent@cursor.com>
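A minimal sketch of that search step, assuming the canary's resolved absolute path is already available (the helper name and the relative-path table below are illustrative only; in the PR the lookup goes through the existing _find_lib_dir_using_anchor_point logic rather than a standalone function):

```python
import os
from typing import Optional

# Illustrative table: per-lib subdirectories under the CTK root for libs that
# do not live on the default loader path.
_NONSTANDARD_REL_DIRS = {"nvvm": ("nvvm/lib64", "nvvm/bin")}

def find_via_ctk_root_canary(canary_abs_path: str, libname: str) -> Optional[str]:
    """Given the resolved absolute path of the canary (cudart), which the system
    linker could find, derive the CTK root and look for `libname` in its known
    non-standard subdirectory."""
    # e.g. /usr/local/cuda-13.0/lib64/libcudart.so.13 -> /usr/local/cuda-13.0
    ctk_root = os.path.dirname(os.path.dirname(canary_abs_path))
    for rel_dir in _NONSTANDARD_REL_DIRS.get(libname, ()):
        lib_dir = os.path.normpath(os.path.join(ctk_root, rel_dir))
        if not os.path.isdir(lib_dir):
            continue
        for entry in sorted(os.listdir(lib_dir)):
            # Accept Linux-style (libnvvm.so*) or Windows-style (nvvm64*) names.
            if entry.startswith(f"lib{libname}.so") or entry.startswith(f"{libname}64"):
                return os.path.join(lib_dir, entry)
    return None
```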
def test_derive_ctk_root_windows_ctk13():
    path = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0\bin\x64\cudart64_13.dll"
This works cross-platform due to the explicit use of ntpath in _derive_ctk_root_windows. Given that the code won't look much different with or without the platform-specific version, it seems somewhat useful to have these tests around instead of having to skip a bunch of them based on platform.
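For illustration, a sketch of why explicit ntpath use makes this test runnable anywhere (the function body here is an assumption, not the PR's exact _derive_ctk_root_windows):

```python
import ntpath

def derive_ctk_root_windows(resolved_dll_path: str) -> str:
    """Sketch: peel the filename, the x64 component, and bin off a resolved
    cudart DLL path to recover the CTK root. ntpath is used explicitly, so
    this works (and is testable) on non-Windows hosts too."""
    path = ntpath.dirname(resolved_dll_path)   # ...\v13.0\bin\x64
    if ntpath.basename(path).lower() == "x64":
        path = ntpath.dirname(path)            # ...\v13.0\bin
    if ntpath.basename(path).lower() == "bin":
        path = ntpath.dirname(path)            # ...\v13.0
    return path

def test_derive_ctk_root_windows_ctk13():
    path = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0\bin\x64\cudart64_13.dll"
    expected = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0"
    assert derive_ctk_root_windows(path) == expected
```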
Pull request overview
This PR adds a CTK root canary probe feature to the pathfinder library to resolve libraries that live in non-standard subdirectories (like libnvvm.so under $CTK_ROOT/nvvm/lib64/). The canary probe discovers the CUDA Toolkit installation root by loading a well-known library (cudart) that IS on the system linker path, deriving the CTK root from its resolved path, and then searching for the target library relative to that root.
Changes:
- Adds canary probe mechanism as a last-resort fallback after CUDA_HOME in the library search cascade
- Introduces CTK root derivation functions for Linux and Windows that extract installation paths from resolved library paths (see the Linux-side sketch after this list)
- Provides comprehensive test coverage (21 tests) for all edge cases and search order behavior
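A Linux-side sketch of that derivation, under the same caveat as above: the layouts handled here (lib64/ and targets/<arch>-linux/lib) are assumptions about common CTK installs, not a description of the PR's code.

```python
import posixpath

def derive_ctk_root_linux(resolved_so_path: str) -> str:
    """Sketch: strip the library directory from a resolved cudart path to
    recover the CTK installation root on Linux."""
    lib_dir = posixpath.dirname(resolved_so_path)
    if posixpath.basename(lib_dir) in ("lib64", "lib"):
        lib_dir = posixpath.dirname(lib_dir)
    # Some installs resolve under <root>/targets/<arch>-linux/lib; peel that too.
    if posixpath.basename(lib_dir).endswith("-linux") and \
            posixpath.basename(posixpath.dirname(lib_dir)) == "targets":
        lib_dir = posixpath.dirname(posixpath.dirname(lib_dir))
    return lib_dir

# derive_ctk_root_linux("/usr/local/cuda-13.0/lib64/libcudart.so.13")
#   -> "/usr/local/cuda-13.0"
```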
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| cuda_pathfinder/tests/test_ctk_root_discovery.py | Comprehensive test suite covering CTK root derivation, canary probe mechanism, and search order priority |
| cuda_pathfinder/cuda/pathfinder/_dynamic_libs/load_nvidia_dynamic_lib.py | Implements the canary probe function and integrates it into the library loading cascade after CUDA_HOME |
| cuda_pathfinder/cuda/pathfinder/_dynamic_libs/find_nvidia_dynamic_lib.py | Adds CTK root derivation functions and try_via_ctk_root method to leverage existing anchor-point search logic |
Tests that create fake CTK directory layouts were hardcoded to Linux paths (lib64/, libnvvm.so) and failed on Windows where the code expects Windows layouts (bin/, nvvm64.dll). Extract platform-aware helpers (_create_nvvm_in_ctk, _create_cudart_in_ctk, _fake_canary_path) that create the right layout and filenames based on IS_WINDOWS. Co-authored-by: Cursor <cursoragent@cursor.com>
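A sketch of what such a helper could look like (the helper name and IS_WINDOWS flag follow the commit message; the exact filenames and layout chosen here are assumptions):

```python
import os

IS_WINDOWS = os.name == "nt"

def _create_nvvm_in_ctk(ctk_root: str) -> str:
    """Create a fake nvvm library inside a fake CTK tree, using the layout the
    loader expects on the current platform (sketch; the PR's helper may differ)."""
    if IS_WINDOWS:
        lib_dir = os.path.join(ctk_root, "nvvm", "bin")
        filename = "nvvm64.dll"
    else:
        lib_dir = os.path.join(ctk_root, "nvvm", "lib64")
        filename = "libnvvm.so"
    os.makedirs(lib_dir, exist_ok=True)
    lib_path = os.path.join(lib_dir, filename)
    open(lib_path, "wb").close()  # an empty placeholder file is enough for path tests
    return lib_path
```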
The rel_paths for nvvm use forward slashes (e.g. "nvvm/bin") which os.path.join on Windows doesn't normalize, producing mixed-separator paths like "...\nvvm/bin\nvvm64.dll". Apply os.path.normpath to the returned directory so all separators are consistent. Co-authored-by: Cursor <cursoragent@cursor.com>
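A small illustration of the failure mode and the fix; the output noted in the comments is what Windows would produce (on POSIX, backslashes are not path separators, so the example is Windows-specific):

```python
import os

ctk_root = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v13.0"
rel_path = "nvvm/bin"                      # rel_paths use forward slashes
mixed = os.path.join(ctk_root, rel_path)   # on Windows: ...\v13.0\nvvm/bin (mixed)
clean = os.path.normpath(mixed)            # on Windows: ...\v13.0\nvvm\bin (consistent)
print(mixed)
print(clean)
```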
rwgk left a comment:
This approach is very similar to what I had back in May 2025 while working on PR #604, but at the time @leofang was strongly opposed to it, and I backed it out.
I still believe the usefulness/pitfall factor is very high for this approach. Leo, what's your opinion now?
If Leo is supportive, I believe it'll be best to import the anchor library (cudart) in a subprocess, to not introduce potentially surprising side-effects in the current process. The original code for that was another point of contention back in May 2025 (it was using subprocess), but in the meantime I addressed those concerns and the current implementation has gone through several rounds of extensive testing (QA) without any modifications for many months. We could easily move it to cuda/pathfinder/_utils.
I think if there's opposition to this approach (which I believe we all discussed on the same call), and if we still care about solving the problem (I think we do), then I'd kindly request a counter-PR implementing an alternative proposal. Otherwise, we're going to get bogged down in review, waiting for the perfect solution. AFAICT, there is no perfect way to solve this problem: it's just picking the least-worst option.
The subprocess approach is not my favorite, but if we're prioritizing isolation, I don't see a less complex option for getting the canary search path.
One argument in favor of this approach is that the search is a last-ditch effort after everything else has failed, and it's backward compatible. You might argue that canary searches are a form of system search, but I actually kept the existing priority in order to reduce the foot-bazooka of this whole thing by keeping existing installations behaving exactly the same.
I think the isolation is important. If we use caching of the […]

I agree with everything else you wrote above.
Resolve CTK canary absolute paths in a spawned Python process so probing cudart does not mutate loader state in the caller process while preserving the nvvm discovery fallback order. Keep JSON as the child-to-parent wire format because it cleanly represents both path and no-result states and avoids fragile stdout/path parsing across platforms. Co-authored-by: Cursor <cursoragent@cursor.com>
Make canary subprocess path extraction explicitly typed and validated so mypy does not treat platform-specific loader results as Any while keeping probe behavior unchanged. Keep import ordering aligned with Ruff so pre-commit is green. Co-authored-by: Cursor <cursoragent@cursor.com>
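A sketch of the parent-side call under those constraints (the module name is taken from the PR description; the JSON key abs_path and the exact error handling are assumptions):

```python
import json
import subprocess
import sys
from typing import Optional

def resolve_canary_path(libname: str = "cudart") -> Optional[str]:
    """Run the canary probe in a fresh interpreter so the caller's loader state
    is untouched, then parse its JSON payload (sketch, not the PR's code)."""
    proc = subprocess.run(
        [sys.executable, "-m",
         "cuda.pathfinder._dynamic_libs.canary_probe_subprocess", libname],
        capture_output=True,
        text=True,
        check=False,
    )
    if proc.returncode != 0:
        return None
    payload = json.loads(proc.stdout)
    abs_path = payload.get("abs_path")  # JSON null -> Python None
    if abs_path is not None and not isinstance(abs_path, str):
        return None  # explicit validation keeps mypy (and callers) honest
    return abs_path
```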
I added the subprocess isolation.
Problem

libnvvm.so lives under $CTK_ROOT/nvvm/lib64/ (or nvvm/bin on Windows), which is not on the default loader path. On bare system CTK installs, dlopen("libnvvm.so.4") can fail when CUDA_HOME/CUDA_PATH is unset even though nvvm is installed.

Solution

Keep the CTK-root canary strategy, but run the canary load in a subprocess:

- The parent runs: python -m cuda.pathfinder._dynamic_libs.canary_probe_subprocess cudart
- The child calls load_with_system_search("cudart") and prints a JSON payload with the resolved absolute path (or null)
- This avoids polluting loader state in the caller process while preserving the existing fallback behavior.
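A sketch of what the child side of that protocol could look like (the real module ships in the PR; ctypes.util.find_library here is only a stand-in for load_with_system_search, and the payload key is an assumption):

```python
import ctypes.util
import json
import sys

def main() -> None:
    libname = sys.argv[1] if len(sys.argv) > 1 else "cudart"
    # Stand-in for load_with_system_search(libname); the real probe reports the
    # resolved absolute path of the loaded library.
    resolved = ctypes.util.find_library(libname)
    # A missing result serializes to null, so the parent can distinguish
    # "probe ran, found nothing" from a crashed child.
    print(json.dumps({"abs_path": resolved}))

if __name__ == "__main__":
    main()
```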
Why JSON for the child-parent payload?

It cleanly represents both the resolved-path and no-result (null) states, and avoids fragile stdout/path parsing across platforms.

Search order

site-packages -> conda -> already-loaded -> system search -> CUDA_HOME -> subprocess canary probe

The canary still runs only after CUDA_HOME, to keep existing precedence.

Tests

- tests/test_ctk_root_discovery.py updated to mock subprocess canary resolution
- load_with_system_search() is not called during canary probing
- CUDA_HOME still wins over canary
- pixi run test-pathfinder (129 passed, 1 skipped)

Made with Cursor
tests/test_ctk_root_discovery.pyto mock subprocess canary resolutionload_with_system_search()is not called during canary probingCUDA_HOMEstill wins over canarypixi run test-pathfinder(129 passed, 1 skipped)Made with Cursor