I am wondering if it’s possible to run a LoRA fine-tuned version of LLaMA 3.2 in the browser using transformers.js. Ideally, I would like to load the base model once and then dynamically load and swap between different LoRA adapters at runtime based on the current task, without reloading the base model each time.
Is this supported in transformers.js? If so, are there any tutorials or examples illustrating how to set this up in a browser environment?
Any guidance or documentation on this would be greatly appreciated. Thank you!
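To make the desired behavior concrete: the pattern being asked about is loading each adapter once, caching it, and switching the active adapter by key without touching the base model. The sketch below shows only that caching/swapping logic in isolation; `AdapterManager` and `loadAdapter` are hypothetical names, and how adapter weights would actually be fetched and applied to a transformers.js model is exactly the open question.

```javascript
// Hypothetical adapter-swapping pattern (not a transformers.js API).
// loadAdapter is a stand-in for whatever fetch/deserialize step a real
// setup would use to obtain LoRA adapter weights.
class AdapterManager {
  constructor(loadAdapter) {
    this.loadAdapter = loadAdapter; // async (name) => adapter weights
    this.cache = new Map();         // adapters already loaded, keyed by name
    this.active = null;             // currently selected adapter
  }

  // Switch to the adapter for the current task, loading it at most once.
  async activate(name) {
    if (!this.cache.has(name)) {
      this.cache.set(name, await this.loadAdapter(name));
    }
    this.active = this.cache.get(name);
    return this.active;
  }
}
```

With this shape, `activate('summarize')` followed later by `activate('translate')` and back again only ever downloads each adapter once; the base model (not shown) is loaded separately and never reloaded.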