Applying attention optimization: Doggettx... done.
Model loaded in 10.2s (load weights from disk: 0.7s, create model: 0.5s, apply weights to model: 7.7s, apply fp8: 0.7s, calculate empty prompt: 0.4s).
To create a public link, set `share=True` in `launch()`.
🤯 LobeTheme: Initializing...
Startup time: 65.8s (prepare environment: 30.9s, import torch: 4.5s, import gradio: 1.2s, setup paths: 2.9s, initialize shared: 0.5s, other imports: 0.5s, load scripts: 7.0s, create ui: 1.2s, gradio launch: 16.9s).
Reusing loaded model v1-5-pruned-emaonly.safetensors [6ce0161689] to load sd3_medium.safetensors [cc236278d2]
Loading weights [cc236278d2] from G:\sd\stable-diffusion-webui-1.10.0\models\Stable-diffusion\sd3_medium.safetensors
Creating model from config: G:\sd\stable-diffusion-webui-1.10.0\configs\sd3-inference.yaml
creating model quickly: TypeError
Traceback (most recent call last):
  File "D:\python\lib\threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "D:\python\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "G:\sd\stable-diffusion-webui-1.10.0\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "G:\sd\stable-diffusion-webui-1.10.0\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "G:\sd\stable-diffusion-webui-1.10.0\modules\ui_settings.py", line 316, in <lambda>
    fn=lambda value, k=k: self.run_settings_single(value, key=k),
  File "G:\sd\stable-diffusion-webui-1.10.0\modules\ui_settings.py", line 95, in run_settings_single
    if value is None or not opts.set(key, value):
  File "G:\sd\stable-diffusion-webui-1.10.0\modules\options.py", line 165, in set
    option.onchange()
  File "G:\sd\stable-diffusion-webui-1.10.0\modules\call_queue.py", line 14, in f
    res = func(*args, **kwargs)
  File "G:\sd\stable-diffusion-webui-1.10.0\modules\initialize_util.py", line 181, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "G:\sd\stable-diffusion-webui-1.10.0\modules\sd_models.py", line 977, in reload_model_weights
    load_model(checkpoint_info, already_loaded_state_dict=state_dict)
  File "G:\sd\stable-diffusion-webui-1.10.0\modules\sd_models.py", line 820, in load_model
    sd_model = instantiate_from_config(sd_config.model, state_dict)
  File "G:\sd\stable-diffusion-webui-1.10.0\modules\sd_models.py", line 775, in instantiate_from_config
    return constructor(**params)
  File "G:\sd\stable-diffusion-webui-1.10.0\modules\models\sd3\sd3_model.py", line 34, in __init__
    self.text_encoders = SD3Cond()
  File "G:\sd\stable-diffusion-webui-1.10.0\modules\models\sd3\sd3_cond.py", line 164, in __init__
    self.tokenizer = SD3Tokenizer()
  File "G:\sd\stable-diffusion-webui-1.10.0\modules\models\sd3\other_impls.py", line 221, in __init__
    self.t5xxl = T5XXLTokenizer()
  File "G:\sd\stable-diffusion-webui-1.10.0\modules\models\sd3\other_impls.py", line 317, in __init__
    super().__init__(pad_with_end=False, tokenizer=T5TokenizerFast.from_pretrained("google/t5-v1_1-xxl"), has_start_token=False, pad_to_max_length=False, max_length=99999999, min_length=77)
  File "G:\sd\stable-diffusion-webui-1.10.0\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1825, in from_pretrained
    return cls._from_pretrained(
  File "G:\sd\stable-diffusion-webui-1.10.0\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1988, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "G:\sd\stable-diffusion-webui-1.10.0\venv\lib\site-packages\transformers\models\t5\tokenization_t5_fast.py", line 133, in __init__
    super().__init__(
  File "G:\sd\stable-diffusion-webui-1.10.0\venv\lib\site-packages\transformers\tokenization_utils_fast.py", line 114, in __init__
    fast_tokenizer = convert_slow_tokenizer(slow_tokenizer)
  File "G:\sd\stable-diffusion-webui-1.10.0\venv\lib\site-packages\transformers\convert_slow_tokenizer.py", line 1307, in convert_slow_tokenizer
    return converter_class(transformer_tokenizer).converted()
  File "G:\sd\stable-diffusion-webui-1.10.0\venv\lib\site-packages\transformers\convert_slow_tokenizer.py", line 445, in __init__
    from .utils import sentencepiece_model_pb2 as model_pb2
  File "G:\sd\stable-diffusion-webui-1.10.0\venv\lib\site-packages\transformers\utils\sentencepiece_model_pb2.py", line 91, in <module>
    _descriptor.EnumValueDescriptor(
  File "G:\sd\stable-diffusion-webui-1.10.0\venv\lib\site-packages\google\protobuf\descriptor.py", line 920, in __new__
    _message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
Failed to create model quickly; will retry using slow method.
changing setting sd_model_checkpoint to sd3_medium.safetensors [cc236278d2]: TypeError
Traceback (most recent call last):
  File "G:\sd\stable-diffusion-webui-1.10.0\modules\options.py", line 165, in set
    option.onchange()
  File "G:\sd\stable-diffusion-webui-1.10.0\modules\call_queue.py", line 14, in f
    res = func(*args, **kwargs)
  File "G:\sd\stable-diffusion-webui-1.10.0\modules\initialize_util.py", line 181, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
  File "G:\sd\stable-diffusion-webui-1.10.0\modules\sd_models.py", line 977, in reload_model_weights
    load_model(checkpoint_info, already_loaded_state_dict=state_dict)
  File "G:\sd\stable-diffusion-webui-1.10.0\modules\sd_models.py", line 829, in load_model
    sd_model = instantiate_from_config(sd_config.model, state_dict)
  File "G:\sd\stable-diffusion-webui-1.10.0\modules\sd_models.py", line 775, in instantiate_from_config
    return constructor(**params)
  File "G:\sd\stable-diffusion-webui-1.10.0\modules\models\sd3\sd3_model.py", line 34, in __init__
    self.text_encoders = SD3Cond()
  File "G:\sd\stable-diffusion-webui-1.10.0\modules\models\sd3\sd3_cond.py", line 164, in __init__
    self.tokenizer = SD3Tokenizer()
  File "G:\sd\stable-diffusion-webui-1.10.0\modules\models\sd3\other_impls.py", line 221, in __init__
    self.t5xxl = T5XXLTokenizer()
  File "G:\sd\stable-diffusion-webui-1.10.0\modules\models\sd3\other_impls.py", line 317, in __init__
    super().__init__(pad_with_end=False, tokenizer=T5TokenizerFast.from_pretrained("google/t5-v1_1-xxl"), has_start_token=False, pad_to_max_length=False, max_length=99999999, min_length=77)
  File "G:\sd\stable-diffusion-webui-1.10.0\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1825, in from_pretrained
    return cls._from_pretrained(
  File "G:\sd\stable-diffusion-webui-1.10.0\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1988, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "G:\sd\stable-diffusion-webui-1.10.0\venv\lib\site-packages\transformers\models\t5\tokenization_t5_fast.py", line 133, in __init__
    super().__init__(
  File "G:\sd\stable-diffusion-webui-1.10.0\venv\lib\site-packages\transformers\tokenization_utils_fast.py", line 114, in __init__
    fast_tokenizer = convert_slow_tokenizer(slow_tokenizer)
  File "G:\sd\stable-diffusion-webui-1.10.0\venv\lib\site-packages\transformers\convert_slow_tokenizer.py", line 1307, in convert_slow_tokenizer
    return converter_class(transformer_tokenizer).converted()
  File "G:\sd\stable-diffusion-webui-1.10.0\venv\lib\site-packages\transformers\convert_slow_tokenizer.py", line 445, in __init__
    from .utils import sentencepiece_model_pb2 as model_pb2
  File "G:\sd\stable-diffusion-webui-1.10.0\venv\lib\site-packages\transformers\utils\sentencepiece_model_pb2.py", line 28, in <module>
    DESCRIPTOR = _descriptor.FileDescriptor(
  File "G:\sd\stable-diffusion-webui-1.10.0\venv\lib\site-packages\google\protobuf\descriptor.py", line 1228, in __new__
    return _message.default_pool.AddSerializedFile(serialized_pb)
TypeError: Couldn't build proto file into descriptor pool: duplicate file name sentencepiece_model.proto
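The TypeError at the end of the log names its own workarounds. A minimal sketch of both, not verified against this setup (POSIX shell shown; on Windows cmd, use `set` instead of `export` and run `pip` from the activated webui venv):

```shell
# Workaround 2 from the error message: force the pure-Python protobuf
# implementation (slower parsing, but avoids the descriptor check).
# Set this in the environment before launching the webui.
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python
echo "PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=$PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"

# Workaround 1 from the error message: downgrade protobuf to the last
# 3.20.x release inside the webui's virtual environment.
# Uncomment and run with the venv's pip:
# pip install "protobuf==3.20.3"
```

The downgrade is the more common fix reported for this class of error, since it keeps the fast C++ protobuf parser.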
Checklist
What happened?
Switching the base model in the webui fails; only the model bundled with the installation can be used.
Steps to reproduce the problem
1. In the Stable Diffusion checkpoint dropdown, select the sd3_medium.safetensors model. After a few seconds of processing, it automatically switches back to v1-5-pruned-emaonly.safetensors.
What should have happened?
The Stable Diffusion model switch should have succeeded, loading sd3_medium.safetensors.
What browsers do you use to access the UI?
Google Chrome
Sysinfo
sysinfo-2024-12-23-12-29.json
Console logs
Additional information
ForgeUI is also installed on this computer.
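Given the protobuf-related TypeError in the console log, a quick way to check whether an installed protobuf version falls in the range the error message recommends (3.20.x or lower). The helper name is my own for illustration, not part of webui:

```python
def protobuf_version_ok(version: str) -> bool:
    """True if `version` is 3.20.x or lower, the range the
    'Descriptors cannot be created directly' error recommends."""
    # Hypothetical helper: compare only the (major, minor) components.
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) <= (3, 20)

# Compare the venv's installed version string against the cutoff.
print(protobuf_version_ok("3.20.3"))  # True: within the recommended range
print(protobuf_version_ok("4.25.1"))  # False: 4.x triggers the TypeError
```

Running `pip show protobuf` inside the webui venv gives the version string to feed this check.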