Replies: 3 comments 6 replies
-
What you're asking for can't be achieved with basic nodes like CheckpointLoaderSimple, because it doesn't use a cache. If you use the Shared Checkpoint Loader from the Inspire Pack, the checkpoint is cached once loaded and can be reused from that cache. However, there is still the constraint that the Shared Checkpoint Loader has to be used in the workflow. If you intend this for very personal use, you can override 'CheckpointLoaderSimple' in NODE_CLASS_MAPPINGS with a version that uses a cache. Also, preloading VRAM can be done in the …
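For example, here is a minimal sketch of such an override, assuming the built-in CheckpointLoaderSimple still exposes load_checkpoint(ckpt_name) and that a custom node's NODE_CLASS_MAPPINGS entry registered under the same key shadows the built-in node (verify against your ComfyUI version):

```python
# custom_nodes/cached_checkpoint_loader/__init__.py  (hypothetical folder name)
# Minimal sketch: wrap the built-in loader with a module-level cache and
# register it under the same key so existing workflows pick it up unchanged.
import nodes

_CKPT_CACHE = {}  # ckpt_name -> (MODEL, CLIP, VAE)

class CachedCheckpointLoaderSimple(nodes.CheckpointLoaderSimple):
    def load_checkpoint(self, ckpt_name):
        # Load the checkpoint only once per name; reuse the cached tuple afterwards.
        if ckpt_name not in _CKPT_CACHE:
            _CKPT_CACHE[ckpt_name] = super().load_checkpoint(ckpt_name)
        return _CKPT_CACHE[ckpt_name]

NODE_CLASS_MAPPINGS = {
    "CheckpointLoaderSimple": CachedCheckpointLoaderSimple,
}
```

Keep in mind that a cache like this never releases the checkpoint and won't notice if the file on disk changes.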
-
While this isn't a direct answer to the method mentioned, if you want to reduce model loading time, here's a tip for limited environments: …
-
Hello, is it possible to use either "prestartup_script.py" or a script higher up in the container to start loading a model as soon as the container starts, so that by the time a workflow reaches the ComfyUI server, the model the loader node needs is already in GPU VRAM?
The aim, of course, is to shorten a pod's cold-start time.
I found that ComfyUI loads models with safetensors.torch, is that right?
(for .safetensors files only, I guess)
Will ComfyUI reload the model if it isn't aware that the model is already in VRAM, and if so, how can I make it aware?
Thanks for your answer.
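To make the idea concrete, here is a rough sketch of the kind of prestartup preload I have in mind (the custom-node folder name and checkpoint path are placeholders, and I'm assuming ComfyUI's own loader would not reuse these tensors on its own):

```python
# custom_nodes/preload_example/prestartup_script.py  (hypothetical folder name)
# Sketch only: loads a checkpoint's tensors onto the GPU before the ComfyUI
# server starts accepting workflows. Caveats: ComfyUI's checkpoint loader keeps
# no reference to these tensors, so by default it will still load the file
# itself, and the tensors loaded here may be freed once this script finishes
# unless something keeps a reference to them. At minimum it warms the OS page cache.
import torch
from safetensors.torch import load_file

CKPT_PATH = "models/checkpoints/your_model.safetensors"  # placeholder path

if torch.cuda.is_available():
    state_dict = load_file(CKPT_PATH, device="cuda:0")  # every tensor goes to cuda:0
    print(f"prestartup: preloaded {len(state_dict)} tensors from {CKPT_PATH}")
```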