patch unexpectedly ends in middle of line #3957

Open
TwinFinz opened this issue Oct 25, 2024 · 3 comments
Labels
bug, unconfirmed

Comments

TwinFinz (Contributor) commented Oct 25, 2024

LocalAI version:
localai/localai:latest-gpu-nvidia-cuda-12 : SHA ff0b3e63d517
(Also occurs on v2.22.1 container image)

Environment, CPU architecture, OS, and Version:
Linux server 6.8.0-47-generic #47-Ubuntu SMP PREEMPT_DYNAMIC Fri Sep 27 21:40:26 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
OS Version: Ubuntu 24.04
Portainer: 2.19.5-CE
CPU: intel 13900K
Ram: 32gb
GPU: 4090

Describe the bug
(Suspected issue: "bugged" llama.cpp builds in later versions.)
The reporter says the container "builds everything for about 2 hours and ends with this".
Upon reproducing the "bug", the build appears to stop at an interactive prompt from patch, expecting input from the user:

patching file examples/llava/clip.cpp
patch unexpectedly ends in middle of line
Reversed (or previously applied) patch detected!  Assume -R? [n]
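
For context, GNU patch prints "patch unexpectedly ends in middle of line" when the patch input is missing its final newline, and the "Assume -R?" prompt appears when the hunks look like they have already been applied. A quick sanity check for the trailing newline (the patch path below is an assumption based on the log, not a verified location inside the image):

tail -c 1 backend/cpp/llama-avx/patches/01-llava.patch | xxd
# last byte 0x0a = file ends with a newline; anything else = final newline missing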

To Reproduce
deploy docker using the following docker-compose.yaml

version: "3.9"
services:
  api:
    image: localai/localai:latest-gpu-nvidia-cuda-12
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
      interval: 1m
      timeout: 20m
      retries: 5
    restart: always
    tty: true
    ports:
      - 8080:8080
    environment:
      - TZ=America/New_York
      - NVIDIA_VISIBLE_DEVICES=0
##      - NVIDIA_DRIVER_CAPABILITIES: all
      - DEBUG=true
      - LOCALAI_WATCHDOG_IDLE=true
      - LOCALAI_WATCHDOG_IDLE_TIMEOUT=30m
      - LOCALAI_WATCHDOG_BUSY=true
      - LOCALAI_WATCHDOG_BUSY_TIMEOUT=30m
      - BUILD_TYPE=cublas
      - LOCALAI_CORS=true
      - LOCALAI_CORS_ALLOW_ORIGINS=*
      - GO_TAGS=stablediffusion
      - REBUILD=true
      - LOCALAI_LOG_LEVEL=debug
      - LOCALAI_IMAGE_PATH=/tmp/generated/images
      - LOCALAI_AUDIO_PATH=/tmp/generated/audio
    volumes:
      - ./models:/build/models:cache
      - ./images/:/tmp/generated/images/
      - ./audio/:/tmp/generated/audio/
    runtime: nvidia
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
##            count: 1
              capabilities: [gpu]
              device_ids: ['0']
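
Assuming the file above is saved as docker-compose.yaml (and that the models/, images/ and audio/ directories exist next to it, which is an assumption about the reporter's layout), the failure can be reproduced with:

docker compose up -d          # REBUILD=true triggers the ~2 hour source rebuild inside the container
docker compose logs -f api    # follow the build output until it stops at the patch prompt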

Expected behavior
Build and launch of server

Logs
Log provided by the user:

I llama-cpp build info:avx
cp -rf backend/cpp/llama backend/cpp/llama-avx
make -C backend/cpp/llama-avx purge
make[1]: Entering directory '/build/backend/cpp/llama-avx'
rm -rf llama.cpp/build
rm -rf llama.cpp/examples/grpc-server
rm -rf grpc-server
make[1]: Leaving directory '/build/backend/cpp/llama-avx'
CMAKE_ARGS=" -DGGML_AVX=on -DGGML_AVX2=off -DGGML_AVX512=off -DGGML_FMA=off -DGGML_F16C=off" make VARIANT="llama-avx" build-llama-cpp-grpc-server
make[1]: Entering directory '/build'
echo "BUILD_GRPC_FOR_BACKEND_LLAMA is not defined."
BUILD_GRPC_FOR_BACKEND_LLAMA is not defined.
LLAMA_VERSION=96776405a17034dcfd53d3ddf5d142d34bdbb657 make -C backend/cpp/llama-avx grpc-server
make[2]: Entering directory '/build/backend/cpp/llama-avx'
mkdir -p llama.cpp/examples/grpc-server
bash prepare.sh
Applying patch 01-llava.patch
patching file examples/llava/clip.cpp
patch unexpectedly ends in middle of line

Log from v2.22.1 container:

make[2]: Leaving directory '/build/backend/cpp/grpc'
_PROTOBUF_PROTOC=/build/backend/cpp/grpc/installed_packages/bin/proto \
_GRPC_CPP_PLUGIN_EXECUTABLE=/build/backend/cpp/grpc/installed_packages/bin/grpc_cpp_plugin \
PATH="/build/backend/cpp/grpc/installed_packages/bin:/root/.cargo/bin:/opt/rocm/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/go/bin:/usr/local/go/bin" \
CMAKE_ARGS="-DLLAMA_F16C=ON -DLLAMA_AVX=ON -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF  -Dcpu=x86_64+f16c -DGGML_AVX=on -DGGML_AVX2=off -DGGML_AVX512=off -DGGML_FMA=off -DGGML_F16C=off -Dabsl_DIR=/build/backend/cpp/grpc/installed_packages/lib/cmake/absl -DProtobuf_DIR=/build/backend/cpp/grpc/installed_packages/lib/cmake/protobuf -Dutf8_range_DIR=/build/backend/cpp/grpc/installed_packages/lib/cmake/utf8_range -DgRPC_DIR=/build/backend/cpp/grpc/installed_packages/lib/cmake/grpc -DCMAKE_CXX_STANDARD_INCLUDE_DIRECTORIES=/build/backend/cpp/grpc/installed_packages/include" \
LLAMA_VERSION=45f097645efb11b6d09a5b4adbbfd7c312ac0126 \
make -C backend/cpp/llama-avx grpc-server
make[2]: Entering directory '/build/backend/cpp/llama-avx'
mkdir -p llama.cpp/examples/grpc-server
bash prepare.sh
Applying patch 01-llava.patch
patching file examples/llava/clip.cpp
patch unexpectedly ends in middle of line
Reversed (or previously applied) patch detected!  Assume -R? [n]

Additional context
I am not the one who originally experienced this issue; it was reported in the #help Discord channel (I am creating this issue as requested). However, I have reproduced the issue on another image.

TwinFinz added the bug and unconfirmed labels on Oct 25, 2024
TwinFinz (Contributor, Author) commented Nov 15, 2024

Container v2.23.0 contains the same issue (presumably a llama.cpp issue).

LocalAi-GPT  | make[2]: Entering directory '/build/backend/cpp/llama-avx'
LocalAi-GPT  | mkdir -p llama.cpp/examples/grpc-server
LocalAi-GPT  | bash prepare.sh
LocalAi-GPT  | Applying patch 01-llava.patch
LocalAi-GPT  | patching file examples/llava/clip.cpp
LocalAi-GPT  | patch unexpectedly ends in middle of line

sequc82 commented Nov 27, 2024

Hi @TwinFinz,

I just ran into this same issue trying to start v2.23.0 for the first time. Is there a solution for this, or can you tell me the last working version to pull instead?

Thanks!

TwinFinz (Contributor, Author) commented Nov 28, 2024

Currently the only solution I know of is to use image v2.20.1; that is the latest version that works for me, since I require REBUILD. If you do not need to rebuild, the fix is simply to disable REBUILD.
I believe this is caused by a change in llama.cpp rather than LocalAI, but I am not sure whether there is a way to make the patch command answer its prompt automatically (the equivalent of a "-y" flag).
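
For reference, GNU patch does have non-interactive modes that avoid this prompt; whether prepare.sh can safely use them is a question for the maintainers, but a rough sketch of the options (the flags are standard GNU patch; the file name comes from the log above, and the -p1 strip level is an assumption):

patch -p1 < 01-llava.patch              # current behaviour: prompts "Assume -R? [n]" and waits
patch -p1 --forward < 01-llava.patch    # -N: skip hunks that appear already applied (exits non-zero when it skips)
patch -p1 --batch < 01-llava.patch      # -t: never ask; assumes -R when a patch looks reversed/already applied
patch -p1 --force < 01-llava.patch      # -f: never ask; assumes the patch is NOT reversed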
