macOS native build not working #1560
Interesting, I'm getting the same error as well across multiple build attempts. I've tried using Docker to build this after cloning the repo, and it shows "unable to locate package conda" even though the build instructions do specify conda as a required dependency. I also ran a variety of
|
@themeaningofmeaning: Unfortunately I get the same error. Any news on this?
|
@themeaningofmeaning I have the same error with Docker. Should we wait for help from the author? |
@glebnaz @muellest - I haven't been able to figure out a solution; it appears to be an issue with the release and Apple Silicon. I haven't tried installing conda or Miniconda before doing a Docker build... that would be one thing to test, but I think we need to wait to hear from the author. I've tried every build in the book and it's not working on the M1 |
Perhaps this is an alternative to using Docker: https://localai.io/basics/build/ (not tested yet). |
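For reference, the native (non-Docker) build the linked page describes boils down to roughly the following on Apple Silicon (repo URL and prerequisites should be checked against the current docs; this is a sketch, not tested here):

```shell
# Native build on an M1/M2 Mac, per https://localai.io/basics/build/
git clone https://github.com/mudler/LocalAI
cd LocalAI
make BUILD_TYPE=metal build
```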
The following patch allows me to compile on the Apple Silicon platform. However, the error messages I encountered during compilation are different from @glebnaz's, so please treat this as a reference only.
diff --git a/Makefile b/Makefile
index 6afc644..4414eaa 100644
--- a/Makefile
+++ b/Makefile
@@ -112,7 +112,7 @@ ifeq ($(BUILD_TYPE),hipblas)
endif
ifeq ($(BUILD_TYPE),metal)
- CGO_LDFLAGS+=-framework Foundation -framework Metal -framework MetalKit -framework MetalPerformanceShaders
+ CGO_LDFLAGS+=-framework Foundation -framework Metal -framework MetalKit -framework MetalPerformanceShaders -framework CoreML
export LLAMA_METAL=1
export WHISPER_METAL=1
endif |
@gaoyifan For me, your patch doesn't help:
go mod edit -replace github.com/nomic-ai/gpt4all/gpt4all-bindings/golang=/Users/glebnaz/Documents/#main/workspace/localai/sources/gpt4all/gpt4all-bindings/golang
go mod edit -replace github.com/go-skynet/go-ggml-transformers.cpp=/Users/glebnaz/Documents/#main/workspace/localai/sources/go-ggml-transformers
go mod edit -replace github.com/donomii/go-rwkv.cpp=/Users/glebnaz/Documents/#main/workspace/localai/sources/go-rwkv
go mod edit -replace github.com/ggerganov/whisper.cpp=/Users/glebnaz/Documents/#main/workspace/localai/sources/whisper.cpp
go mod edit -replace github.com/ggerganov/whisper.cpp/bindings/go=/Users/glebnaz/Documents/#main/workspace/localai/sources/whisper.cpp/bindings/go
go mod edit -replace github.com/go-skynet/go-bert.cpp=/Users/glebnaz/Documents/#main/workspace/localai/sources/go-bert
go mod edit -replace github.com/mudler/go-stable-diffusion=/Users/glebnaz/Documents/#main/workspace/localai/sources/go-stable-diffusion
go mod edit -replace github.com/M0Rf30/go-tiny-dream=/Users/glebnaz/Documents/#main/workspace/localai/sources/go-tiny-dream
go mod edit -replace github.com/mudler/go-piper=/Users/glebnaz/Documents/#main/workspace/localai/sources/go-piper
go mod download
touch prepare-sources
touch prepare
CGO_LDFLAGS=" -lcblas -framework Accelerate -framework Foundation -framework Metal -framework MetalKit -framework MetalPerformanceShaders -framework CoreML" C_INCLUDE_PATH=/Users/glebnaz/Documents/#main/workspace/localai/sources/go-ggml-transformers LIBRARY_PATH=/Users/glebnaz/Documents/#main/workspace/localai/sources/go-ggml-transformers \
go build -ldflags " -X "github.com/go-skynet/LocalAI/internal.Version=v2.4.1-2-g62a02cd" -X "github.com/go-skynet/LocalAI/internal.Commit=62a02cd1feb7cf8a75bcbe253fb95f204f022c1f"" -tags "" -o backend-assets/grpc/falcon-ggml ./backend/go/llm/falcon-ggml/
backend/go/llm/transformers/dolly.go:11:2: falcon.go: malformed #cgo argument: -I/Users/glebnaz/Documents/#main/workspace/localai/sources/go-ggml-transformers/ggml.cpp/include/
make: *** [backend-assets/grpc/falcon-ggml] Error 1
But thank you for your comment |
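A side note on the log above: the "malformed #cgo argument" error is likely caused by the `#` character in the checkout path (`/Users/glebnaz/Documents/#main/...`). The Go toolchain validates flags in `#cgo` directives against a safe character set that does not include `#`, so a checkout under a path containing `#` (or spaces/quotes) can fail this way regardless of the Makefile. A minimal sketch of a pre-flight check (the function name is made up for illustration; the practical fix is simply moving the repo to a plain path):

```shell
# Report whether a checkout path contains characters that Go's cgo
# flag validation is known to reject (e.g. '#', spaces, quotes).
check_cgo_safe_path() {
  case "$1" in
    *"#"*|*" "*|*"'"*|*'"'*) echo "unsafe" ;;
    *) echo "ok" ;;
  esac
}

check_cgo_safe_path "/Users/glebnaz/Documents/#main/workspace/localai"  # -> unsafe
check_cgo_safe_path "/Users/glebnaz/workspace/localai"                  # -> ok
```

If the path reports unsafe, re-cloning into something like `~/workspace/localai` before running `make` is worth trying.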
Anything new on this topic? Are there any identified workarounds or solutions? @mudler, could you please confirm whether this can be recognized as a confirmed bug? |
@mudler do you have time to help work on a solution? |
I don't have a Mac (yet) to double check, but the CI doesn't show the same symptoms. Actually, the falcon-ggml backend is a candidate for deprecation (see #1126 for more context). Maybe @dave-gray101 can chime in here; I think you fixed macOS builds lately, right? However, in any case I think you could work around this by selectively choosing which backends to build at build time, for instance to build only
see also: https://localai.io/basics/build/#build-only-a-single-backend |
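Per the linked doc, restricting the build to a single backend looks roughly like the following (the `GRPC_BACKENDS` variable name and backend path are taken from the docs as of this writing; verify them against the Makefile in your checkout):

```shell
# Build only the llama-cpp gRPC backend, skipping falcon-ggml and the rest
make BUILD_TYPE=metal GRPC_BACKENDS=backend-assets/grpc/llama-cpp build
```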
@dave-gray101 Can you support/help us? |
I am reproducibly able to build 2.7 with EDIT:
|
I was able to compile today with the tts, stablediffusion and tinydream backends. It builds, but it's a bit of a mess. I made a PR for one issue, but I'm not sure where to implement the remaining fixes. |
Please consider adding Core ML model package format support to utilize the Apple Silicon Neural Engine + GPU. List of Core ML package format models |
Here is some additional info about running LLMs locally on Apple Silicon. Core ML is a framework that can distribute workloads across the CPU, GPU & Neural Engine (ANE). The ANE is available on all modern Apple devices: iPhones & Macs (A14 or newer and M1 or newer). Ideally, we want to run LLMs on the ANE only, as it has optimizations for running ML tasks compared to the GPU. Apple claims you can deploy "your Transformer models on Apple devices with an A14 or newer and M1 or newer chip to achieve up to 10 times faster and 14 times lower peak memory consumption compared to baseline implementations".
https://machinelearning.apple.com/research/neural-engine-transformers |
You might also be interested in another implementation, Swift Transformers. There is also work in progress on a Core ML implementation for whisper.cpp (ggerganov/whisper.cpp#548); they see 3x performance improvements for some models. Example of a Core ML application |
I'm getting a different error when trying to follow the Build on Mac instructions; it seems like the
This is my system report:
|
just using so that issue is resolved? |
LocalAI version:
v2.4.1
Environment, CPU architecture, OS, and Version:
MBP 14 M1 PRO
Describe the bug
make build and make BUILD_TYPE=metal build are not working.
To Reproduce
Run make BUILD_TYPE=metal build locally (as in the instructions).
Expected behavior
Error
Logs
Additional context