Replies: 1 comment
The sampler has been completely refactored and now natively supports the
Hi there,
Thank you for your great work!
I've been playing with your llama-cpp-python to use 'Qwen3VL-8B-Instruct-Q8_0.gguf'.
It works fine; however, versions newer than 0.3.23 give me the following warnings.
I don't mind as long as my system works. Just for your information.
Project is here, https://github.com/SwHaraday/local-vlm-rag
Thanks again for your continuing efforts!