commit 12e8c12d2b

When CUDA peer access is enabled, multi-GPU inference produces garbage output. This is a known bug in llama.cpp (or in NVIDIA's driver). Until the upstream bug is fixed, we disable CUDA peer access temporarily to ensure correct output. See #961.
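For context, peer access is toggled per device pair through the CUDA runtime. The sketch below is hypothetical, not the patch applied in this commit: it walks every pair of visible GPUs and tears down peer access with `cudaDeviceDisablePeerAccess`, after which device-to-device copies fall back to staging through host memory. In llama.cpp the equivalent calls would have to run in-process, before tensors are split across GPUs.

```c
/*
 * disable_peer.c: a minimal sketch (not the actual llama.cpp patch)
 * showing how CUDA peer access is disabled for every pair of visible
 * GPUs via the CUDA runtime API.
 */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int dev = 0; dev < count; ++dev) {
        cudaSetDevice(dev);
        for (int peer = 0; peer < count; ++peer) {
            if (peer == dev) continue;

            int can = 0;
            cudaDeviceCanAccessPeer(&can, dev, peer);
            if (!can) continue;

            /* Returns cudaErrorPeerAccessNotEnabled if access was
             * never enabled on this pair; that case is harmless. */
            cudaError_t err = cudaDeviceDisablePeerAccess(peer);
            if (err != cudaSuccess && err != cudaErrorPeerAccessNotEnabled) {
                fprintf(stderr, "device %d -> peer %d: %s\n",
                        dev, peer, cudaGetErrorString(err));
            }
            cudaGetLastError(); /* clear any sticky error state */
        }
    }
    return 0;
}
```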
Files:

- llama.cpp
- falcon.go
- ggml.go
- gguf.go
- llama.go
- llm.go
- starcoder.go
- utils.go