Releases: mudler/LocalAI

v2.9.0

24 Feb 10:47
ff88c39

This release brings many enhancements and fixes - and a special thanks to the community for the amazing work and contributions!

We now have sycl images for Intel GPUs, ROCm images for AMD GPUs, and much more:

  • AMD GPU image tags are listed among the available container images - look for hipblas, for example master-hipblas-ffmpeg-core (a run sketch follows this list). Thanks to @fenfir for this nice contribution!
  • Intel GPU images are tagged with sycl and come in two flavors, sycl-f16 and sycl-f32 - for example, master-sycl-f16. Work is in progress to also support diffusers and transformers on Intel GPUs.
  • Thanks to @christ66, the first efforts towards supporting the Assistant API have been made, and full support is planned. Stay tuned for more!
  • LocalAI now supports the Tools API endpoint - the (now deprecated) functions API call keeps working as usual, and SSE with function calling is now supported too. See #1726 for more, and the request sketch after this list.
  • Support for Gemma models - did you hear? Google released OSS models and LocalAI supports them already!
  • Thanks to @dave-gray101 for the efforts in #1728 to refactor parts of the code - soon we will support more ways to interface with LocalAI, not only the REST API!
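
As a quick illustration of the new AMD images, a minimal sketch for starting phi-2 on a ROCm-capable host could look like the following (the --device flags are the usual ROCm device passthrough and may need adjusting for your system; phi-2 is just an example model):

docker run -ti -p 8080:8080 -v $PWD/models:/build/models --device /dev/kfd --device /dev/dri --rm quay.io/go-skynet/local-ai:master-hipblas-ffmpeg-core phi-2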
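
Similarly, here is a minimal sketch of a Tools API request against a running instance (assuming the default port 8080 and a model configured locally under the name phi-2; the body follows the OpenAI-compatible tools schema, and the get_weather function is purely hypothetical):

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "phi-2",
  "messages": [{"role": "user", "content": "What is the weather like in Boston?"}],
  "tools": [{"type": "function", "function": {"name": "get_weather", "parameters": {"type": "object", "properties": {"location": {"type": "string"}}, "required": ["location"]}}}],
  "tool_choice": "auto"
}'

Adding "stream": true to the body should exercise the new SSE path with function calling.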

Support the project

First off, a massive thank you to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!

And to those of you who've been heroes, giving up your own time to help out fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.

Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsorship program can make a big difference.

Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy

Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome together!

Thanks a ton, and here's to more exciting times ahead with LocalAI! 🚀

What's Changed

Bug fixes 🐛

Exciting New Features 🎉

👒 Dependencies

Other Changes

New Contributors

Full Changelog: v2.8.2...v2.9.0

v2.8.2

15 Feb 20:54
e690bf3

What's Changed

Bug fixes 🐛

  • fix(tts): fix regression when supplying backend from requests by @mudler in #1713

Full Changelog: v2.8.1...v2.8.2

v2.8.1

15 Feb 09:51
5e155fb

This is a patch release, mostly containing minor patches and bugfixes from 2.8.0.

Most importantly, it contains a bugfix for #1333, which caused the llama.cpp backend to get stuck in some cases when the model starts to hallucinate, as well as fixes to the Python-based backends.

What's Changed

Bug fixes 🐛

Exciting New Features 🎉

  • feat(tts): respect YAMLs config file, add sycl docs/examples by @mudler in #1692
  • ci: add cuda builds to release by @sozercan in #1702

Other Changes

Full Changelog: v2.8.0...v2.8.1

v2.8.0

10 Feb 00:29
ef1306f

This release adds support for Intel GPUs and deprecates the old ggml-based backends, which are now superseded by llama.cpp (which supports more architectures out-of-the-box). See also #1651.

Images are now based on Ubuntu 22.04 LTS instead of Debian bullseye.

Intel GPUs

There are now images tagged with "sycl", in two flavors: sycl-f16 and sycl-f32, indicating f16 or f32 support respectively.

For example, to start phi-2 with an Intel GPU it is enough to use the container image like this:

docker run -e DEBUG=true -ti -v $PWD/models:/build/models -p 8080:8080  -v /dev/dri:/dev/dri --rm quay.io/go-skynet/local-ai:master-sycl-f32-ffmpeg-core phi-2

What's Changed

Exciting New Features 🎉

  • feat(sycl): Add support for Intel GPUs with sycl (#1647) by @mudler in #1660
  • Drop old falcon backend (deprecated) by @mudler in #1675
  • ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1678
  • Drop ggml-based gpt2 and starcoder (supported by llama.cpp) by @mudler in #1679
  • fix(Dockerfile): sycl dependencies by @mudler in #1686
  • feat: Use ubuntu as base for container images, drop deprecated ggml-transformers backends by @mudler in #1689

👒 Dependencies

Other Changes

New Contributors

Full Changelog: v2.7.0...v2.8.0

v2.7.0

29 Jan 09:13
abd678e

This release adds LLM (text generation) support to the transformers backend as well!

For instance, you can now run codellama-7b with transformers:

docker run -ti -p 8080:8080 --gpus all localai/localai:v2.7.0-cublas-cuda12 codellama-7b
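
Once the container is up, you can talk to it through the OpenAI-compatible API as usual - a minimal sketch, assuming the port mapping above and codellama-7b as the model name:

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"model": "codellama-7b", "messages": [{"role": "user", "content": "Write a hello world program in Go"}]}'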

More examples are available in the quickstart: https://localai.io/basics/getting_started/#running-models.

Note: as llama.cpp has ongoing changes that could possibly cause breakage, this release does not include the changes from ggerganov/llama.cpp#5138 (future versions will).

What's Changed

Bug fixes 🐛

  • fix(paths): automatically create paths by @mudler in #1650

Exciting New Features 🎉

  • feat(transformers): support also text generation by @mudler in #1630
  • transformers: correctly load automodels by @mudler in #1643
  • feat(startup): fetch model definition remotely by @mudler in #1654

👒 Dependencies

Other Changes

Full Changelog: v2.6.1...v2.7.0

v2.6.1

23 Jan 17:22
d5d82ba

This is a patch release containing bug-fixes around parallel request support with llama.cpp models.

What's Changed

Bug fixes 🐛

  • fix(llama.cpp): Enable parallel requests by @tauven in #1616
  • fix(llama.cpp): enable cont batching when parallel is set by @mudler in #1622

Exciting New Features 🎉

  • feat(grpc): backend SPI pluggable in embedding mode by @coyzeng in #1621

👒 Dependencies

Other Changes

New Contributors

Full Changelog: v2.6.0...v2.6.1

v2.6.0

20 Jan 17:34
06cd9ef

What's Changed

Bug fixes 🐛

  • move BUILD_GRPC_FOR_BACKEND_LLAMA logic to makefile: errors in this section now immediately fail the build by @dionysius in #1576
  • prepend built binaries in PATH for BUILD_GRPC_FOR_BACKEND_LLAMA by @dionysius in #1593

Exciting New Features 🎉

  • minor: replace shell pwd in Makefile with CURDIR for better windows compatibility by @dionysius in #1571
  • Makefile: allow to build without GRPC_BACKENDS by @mudler in #1607
  • feat: 🐍 add mamba support by @mudler in #1589
  • feat(extra-backends): Improvements, adding mamba example by @mudler in #1618

👒 Dependencies

Other Changes

New Contributors

Full Changelog: v2.5.1...v2.6.0

v2.5.1

09 Jan 08:00
5309da4

Patch release to create /build/models in the container images.

What's Changed

Other Changes

Full Changelog: v2.5.0...v2.5.1

v2.5.0

08 Jan 13:55
574fa67

What's Changed

This release adds more embedded models and shrinks image sizes.

You can now run phi-2 (see here for the full list) locally by starting localai with:

docker run -ti -p 8080:8080 localai/localai:v2.5.0-ffmpeg-core phi-2

LocalAI now accepts as arguments a list of model short-hands and/or URLs pointing to valid YAML files. A popular way to host those files is GitHub gists.

For instance, you can run llava by starting local-ai with:

docker run -ti -p 8080:8080 localai/localai:v2.5.0-ffmpeg-core https://raw.githubusercontent.com/mudler/LocalAI/master/embedded/models/llava.yaml
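
Such a URL simply points to a regular LocalAI model configuration. A minimal sketch of what one might contain (the field names come from the LocalAI model config format, but the values here are purely illustrative - check the documentation and the embedded llava.yaml for real-world examples):

name: my-model
backend: llama          # backend name; check the docs for the exact value in your version
parameters:
  model: my-model-file.gguf
context_size: 2048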

Exciting New Features 🎉

  • feat: more embedded models, coqui fixes, add model usage and description by @mudler in #1556

👒 Dependencies

  • deps(conda): use transformers-env with vllm,exllama(2) by @mudler in #1554
  • deps(conda): use transformers environment with autogptq by @mudler in #1555
  • ⬆️ Update ggerganov/llama.cpp by @localai-bot in #1558

Other Changes

Full Changelog: v2.4.1...v2.5.0

v2.4.1

06 Jan 00:05
ce724a7

What's Changed

Exciting New Features 🎉

  • feat: embedded model configurations, add popular model examples, refactoring by @mudler in #1532

Other Changes

Full Changelog: v2.4.0...v2.4.1