I'm trying to run the simple demo below, but I get an ImportError for Accelerator.
Demo:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList

tokenizer = AutoTokenizer.from_pretrained("StabilityAI/stablelm-tuned-alpha-7b")
model = AutoModelForCausalLM.from_pretrained("StabilityAI/stablelm-tuned-alpha-7b")
model.half().cuda()

class StopOnTokens(StoppingCriteria):
    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        stop_ids = [50278, 50279, 50277, 1, 0]
        for stop_id in stop_ids:
            if input_ids[0][-1] == stop_id:
                return True
        return False

system_prompt = """<|SYSTEM|># StableLM Tuned (Alpha version)
- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
- StableLM will refuse to participate in anything that could harm a human.
"""

prompt = f"{system_prompt}<|USER|>What's your mood today?<|ASSISTANT|>"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
tokens = model.generate(
    **inputs,
    max_new_tokens=64,
    temperature=0.7,
    do_sample=True,
    stopping_criteria=StoppingCriteriaList([StopOnTokens()]),
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
Error:
Traceback (most recent call last):
File "/packages/miniconda/envs/user/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1146, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/packages/miniconda/envs/user/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/packages/miniconda/envs/user/lib/python3.10/site-packages/transformers/models/gpt_neox/modeling_gpt_neox.py", line 32, in <module>
from ...modeling_utils import PreTrainedModel
File "/packages/miniconda/envs/user/lib/python3.10/site-packages/transformers/modeling_utils.py", line 83, in <module>
from accelerate import __version__ as accelerate_version
File "/home/user/.local/lib/python3.10/site-packages/accelerate/__init__.py", line 3, in <module>
from .accelerator import Accelerator
ImportError: cannot import name 'Accelerator' from 'accelerate.accelerator'
These are the packages in this environment:
Accelerate 0.19.0
Python 3.10.10
PyTorch 2.0.0
Transformers 4.28.1
I've double-checked the accelerate library and it is correctly installed. Can anyone share which versions of these libraries you're using, and let me know what the problem could be?
Thank you!
It looks like your conda env is a bit mixed up: you're running Python from the miniconda env user, but the traceback shows accelerate being imported from your user-site install (/home/user/.local/lib/python3.10/site-packages) rather than from the env.
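One way to confirm which copies the interpreter would actually pick up (a minimal stdlib-only sketch; the package names are the ones from the traceback) is to print each module's origin next to the interpreter path:

```python
import importlib.util
import sys
from typing import Optional

def module_location(name: str) -> Optional[str]:
    """Return the file a top-level module would be imported from, or None."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec and spec.origin else None

# A package path under ~/.local/... while sys.executable points into the
# conda env is exactly the kind of mix-up described above.
print("python:", sys.executable)
for pkg in ("transformers", "accelerate"):
    print(f"{pkg}:", module_location(pkg) or "not found")
```

If the paths disagree, uninstalling the stray user-site copy (or reinstalling inside the env) should resolve the ImportError.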
Thank you so much for the good catch. The path is fixed and now it's running.
However, I notice that it often fills up CUDA memory very quickly, even right after the model checkpoint is loaded. I'm running it on an A100 GPU, which should have plenty of headroom for a 7B model in half precision. Any insight into this matter is much appreciated!
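Two hedged suggestions, not confirmed fixes: model.half().cuda() first materialises the full fp32 checkpoint before converting, so passing torch_dtype=torch.float16 to from_pretrained may avoid the fp32 copy; and PyTorch's caching allocator reserves more memory than is allocated, which can make usage look higher than it is. A minimal sketch for inspecting what the allocator actually holds:

```python
import torch

def cuda_memory_summary() -> dict:
    """Report allocated vs. reserved CUDA memory in GiB (empty on CPU-only machines)."""
    if not torch.cuda.is_available():
        return {}
    free, total = torch.cuda.mem_get_info()
    return {
        "allocated_gib": torch.cuda.memory_allocated() / 2**30,  # tensors in use
        "reserved_gib": torch.cuda.memory_reserved() / 2**30,    # allocator cache
        "free_gib": free / 2**30,
        "total_gib": total / 2**30,
    }

print(cuda_memory_summary())
```

Calling this right after from_pretrained and again after .half().cuda() should show whether the spike comes from the dtype conversion or from the allocator cache.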