Add a button to prune unused hash #16700
Conversation
for db in ('hashes', 'hashes-addnet', 'safetensors-metadata'):
    existing_cache = cache(db)
    total_count = len(existing_cache)
    with tqdm.tqdm(total=total_count, desc=f'pruning {db}') as progress:
        for name in list(existing_cache):  # iterate over a copy of the keys so entries can be deleted below
            if '/' not in name:
                progress.update(1)
                continue

            category, filename = name.split('/', 1)
            if category.lower() == 'lora':
                exists = file_exists(os.path.join(models_path, 'Lora'), filename)
            elif category.lower() == 'checkpoint':
                exists = file_exists(os.path.join(models_path, 'Stable-diffusion'), filename)
            elif category.lower() == 'textual_inversion':
                exists = file_exists(cmd_opts.embeddings_dir, filename)
            else:
                # unknown category: leave the entry untouched
                progress.update(1)
                continue

            if not exists:
                del existing_cache[name]
            progress.update(1)
this is wrong
-
completely forgot about hypernet
-
you are assuming that the user is using the default directories, which may not be the case; the path of each model directory is a complicated mess, and some models, such as Lora, can come from multiple directories (see the sketch below)
-
extensions could also add their own cache entries; I don't think there's a way to avoid accidentally removing those entries.
As you said, "which can reduce file size by a tiny amount, and improve load time by an even tinier amount...", so personally I wouldn't bother making it work; it's not worth the effort.
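For illustration only (not part of the PR or the review), here is a minimal sketch of how per-category search paths could be collected instead of hardcoding the default layout. The directory names, the hypernet category key, and the cmd_opts attributes (ckpt_dir, lora_dir, hypernetwork_dir) are assumptions about the webui's configuration, not something this PR adds:

import os

def candidate_dirs(category, models_path, cmd_opts):
    # Map a cache category to every directory its files might live in,
    # including command-line overrides when they are set.
    dirs = {
        'checkpoint': [os.path.join(models_path, 'Stable-diffusion'), getattr(cmd_opts, 'ckpt_dir', None)],
        'lora': [os.path.join(models_path, 'Lora'), getattr(cmd_opts, 'lora_dir', None)],
        'hypernet': [os.path.join(models_path, 'hypernetworks'), getattr(cmd_opts, 'hypernetwork_dir', None)],
        'textual_inversion': [getattr(cmd_opts, 'embeddings_dir', None)],
    }.get(category.lower(), [])
    return [d for d in dirs if d]  # drop unset overrides

A pruner could then treat an entry as stale only if its file is missing from every candidate directory, and skip categories it does not recognize at all.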
But yeah, not really worth it to deal with the mess that is the current model system...
Description
Simple Description: When you download/train a model to experiment, the Webui caches its hash. Over time, you will have hundreds if not thousands of hashes for models that no longer exist. Therefore, I added a button to clean this up, which can reduce file size by a tiny amount, and improve load time by an even tinier amount...
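As a purely hypothetical illustration of what accumulates, cache entries are keyed by "category/filename" (the pattern the diff above splits on); the filenames below are made up, and the mtime/sha256 value layout is an assumption about how the webui's cache file stores hashes:

# made-up stale entries that the button would remove once the files are gone
stale_entries = {
    "checkpoint/old_experiment.safetensors": {"mtime": 1690000000.0, "sha256": "..."},
    "lora/deleted_test_lora.safetensors": {"mtime": 1695000000.0, "sha256": "..."},
}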
Summary of Changes:
- A "Prune all unused hash" button in Settings/Actions, which, when clicked, calls the prune_unused_hash function in cache.py
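For context, a hedged sketch of how such a button is typically wired up in a gradio UI. The element names are assumptions, this is not the PR's actual Settings/Actions code, and it assumes prune_unused_hash takes no arguments:

import gradio as gr
from modules import cache  # the PR adds prune_unused_hash to cache.py

with gr.Blocks() as settings_actions:
    prune_button = gr.Button(value='Prune all unused hash')
    prune_button.click(fn=cache.prune_unused_hash, inputs=[], outputs=[])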
Checklist
Performance
Checking hundreds of entries took less than a second~