Hey folks, I'm having trouble running on my GPU and I suspect something is set up incorrectly.
I'm on Windows 11, and I can run nvidia-smi fine in the container.
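For reference, this is roughly how I checked (the container name local-ai comes from the run command below; the CUDA base image in the second command is just whatever CUDA-enabled image happens to be handy):
docker exec -it local-ai nvidia-smi
and, as a standalone sanity check that --gpus all passthrough works at all:
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi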
However, when I run the Docker image I don't get the expected output on startup; I'm seeing a warning that others have reported, but I haven't found a resolution for it. This is the startup log from running:
docker run -p 8080:8080 -v D:/models:/models -e DEBUG=true --gpus all --name local-ai -ti localai/localai:latest-gpu-nvidia-cuda-12 --models-path /models --threads 12
@@@@@
Skipping rebuild
@@@@@
If you are experiencing issues with the pre-compiled builds, try setting REBUILD=true
If you are still experiencing issues with the build, try setting CMAKE_ARGS and disable the instructions set as needed:
CMAKE_ARGS="-DGGML_F16C=OFF -DGGML_AVX512=OFF -DGGML_AVX2=OFF -DGGML_FMA=OFF"
see the documentation at: https://localai.io/basics/build/index.html
Note: See also https://github.com/go-skynet/LocalAI/issues/288
@@@@@
CPU info:
model name : 12th Gen Intel(R) Core(TM) i7-12700KF
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
CPU: AVX found OK
CPU: AVX2 found OK
CPU: no AVX512 found
@@@@@
5:38PM INF env file found, loading environment variables from file envFile=.env
5:38PM DBG Setting logging to debug
5:38PM INF Starting LocalAI using 12 threads, with models path: /models
5:38PM INF LocalAI version: v2.24.0 (87b7648591573ea59c090b1095ba3073623933ad)
5:38PM DBG CPU capabilities: [3dnowprefetch abm adx aes apic arch_capabilities avx avx2 avx_vnni bmi1 bmi2 clflush clflushopt clwb cmov constant_tsc cpuid cx16 cx8 de ept ept_ad erms f16c flush_l1d fma fpu fsgsbase fsrm fxsr gfni ht hypervisor ibpb ibrs ibrs_enhanced invpcid invpcid_single lahf_lm lm mca mce md_clear mmx movbe movdir64b movdiri msr mtrr nonstop_tsc nopl nx pae pat pcid pclmulqdq pdpe1gb pge pni popcnt pse pse36 rdpid rdrand rdseed rdtscp rep_good sep serialize sha_ni smap smep ss ssbd sse sse2 sse4_1 sse4_2 ssse3 stibp syscall tpr_shadow tsc tsc_adjust tsc_deadline_timer tsc_reliable umip vaes vme vmx vnmi vpclmulqdq vpid waitpkg x2apic xgetbv1 xsave xsavec xsaveopt xsaves xtopology]
WARNING: failed to determine nodes: open /sys/devices/system/node: no such file or directory
WARNING: failed to read int from file: open /sys/class/drm/card0/device/numa_node: no such file or directory
WARNING: failed to determine nodes: open /sys/devices/system/node: no such file or directory
WARNING: error parsing the pci address "vgem"
5:38PM DBG GPU count: 1
5:38PM DBG GPU: card #0 @vgem
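For what it's worth, my reading of the rebuild hint in that banner is that it would be passed as an extra environment variable, roughly like this; I'm not sure it applies here, since the problem looks like GPU detection rather than the pre-compiled build itself:
docker run -p 8080:8080 -v D:/models:/models -e DEBUG=true -e REBUILD=true --gpus all --name local-ai -ti localai/localai:latest-gpu-nvidia-cuda-12 --models-path /models --threads 12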
Some output from inside the same container:
# ls /sys/class/drm/card0/device
driver_override drm modalias subsystem uevent
# ls /sys/devices/system
clockevents clocksource container cpu memory
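For completeness, that shell was just an exec into the same container (reconstructed from my history, so the exact session may have differed slightly):
docker exec -it local-ai /bin/bash
From there, the two paths the warnings point at (/sys/devices/system/node and /sys/class/drm/card0/device/numa_node) simply don't exist, which matches the listings above.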
Any suggestions to try?